Test Report: Docker_Linux_crio_arm64 17314

720b04249cd58de6fa013ef84ee34e212d9c3117:2023-10-06:31319

Failed tests (7/302)

Order  Failed test  Duration (s)
28 TestAddons/parallel/Ingress 168.6
159 TestIngressAddonLegacy/serial/ValidateIngressAddons 179.37
209 TestMultiNode/serial/PingHostFrom2Pods 4.32
230 TestRunningBinaryUpgrade 77.08
233 TestMissingContainerUpgrade 146.89
245 TestStoppedBinaryUpgrade/Upgrade 74.2
256 TestPause/serial/SecondStartNoReconfiguration 90.46
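
To iterate on one of these failures locally, the suite can be filtered through the standard Go test runner. A minimal sketch, assuming the integration tests live under test/integration and that out/minikube-linux-arm64 (the binary exercised throughout this log) is already built; the --binary flag is an assumption to verify against test/integration/main_test.go:

    # Re-run only the failing subtest; -run takes a regexp over the test name.
    go test -v ./test/integration -timeout 60m \
      -run 'TestAddons/parallel/Ingress' \
      --binary=out/minikube-linux-arm64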
TestAddons/parallel/Ingress (168.6s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-891734 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-891734 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-891734 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [20c72671-3250-4814-8b9e-706b5944bd3e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [20c72671-3250-4814-8b9e-706b5944bd3e] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.012746686s
addons_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p addons-891734 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-891734 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.489470571s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
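Exit status 28 here is curl's code for an operation timeout, passed through by the ssh wrapper: the request was issued on the node but the ingress never answered. A manual re-check along the same lines, reusing the profile name from this run (the follow-up kubectl queries are diagnostic suggestions, not part of the test):

    # Repeat the probe verbosely with an explicit timeout (curl exit 28 = timed out).
    out/minikube-linux-arm64 -p addons-891734 ssh \
      "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # Confirm the controller pod is still Ready and the Ingress object was admitted.
    kubectl --context addons-891734 -n ingress-nginx get pods -o wide
    kubectl --context addons-891734 describe ingress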
addons_test.go:285: (dbg) Run:  kubectl --context addons-891734 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p addons-891734 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.049121632s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
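The timeout means nothing answered DNS queries on 192.168.49.2:53, the node IP where the ingress-dns responder should listen. A quick diagnostic sketch (the dig flags are standard; the grep pattern is only a guess at the addon's pod name):

    # Query the node IP directly with a short timeout and a single attempt.
    dig @192.168.49.2 hello-john.test +time=5 +tries=1
    # Verify the ingress-dns pod was actually running when the lookup failed.
    kubectl --context addons-891734 -n kube-system get pods | grep -i ingress-dns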
addons_test.go:305: (dbg) Run:  out/minikube-linux-arm64 -p addons-891734 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-arm64 -p addons-891734 addons disable ingress-dns --alsologtostderr -v=1: (1.375665514s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-arm64 -p addons-891734 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-arm64 -p addons-891734 addons disable ingress --alsologtostderr -v=1: (7.836494509s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-891734
helpers_test.go:235: (dbg) docker inspect addons-891734:

-- stdout --
	[
	    {
	        "Id": "1bf82584606a26622e8b544b23e136e6eff90546573d21bf23266150114bdfab",
	        "Created": "2023-10-06T02:11:53.908889024Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2269284,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-06T02:11:54.261990564Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7c31788aee97084e64d3a410721295a10fc01c1f34b468c1bc9be09686708026",
	        "ResolvConfPath": "/var/lib/docker/containers/1bf82584606a26622e8b544b23e136e6eff90546573d21bf23266150114bdfab/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1bf82584606a26622e8b544b23e136e6eff90546573d21bf23266150114bdfab/hostname",
	        "HostsPath": "/var/lib/docker/containers/1bf82584606a26622e8b544b23e136e6eff90546573d21bf23266150114bdfab/hosts",
	        "LogPath": "/var/lib/docker/containers/1bf82584606a26622e8b544b23e136e6eff90546573d21bf23266150114bdfab/1bf82584606a26622e8b544b23e136e6eff90546573d21bf23266150114bdfab-json.log",
	        "Name": "/addons-891734",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-891734:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-891734",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/68f1962a5a429132c6b10da0f3c3a2050999251166dda2b568ce3b324f48ec1c-init/diff:/var/lib/docker/overlay2/ab4f4fc5e8cd2d4bbf1718e21432b9cb0d953b7279be1c1cbb7bd550f03b46dc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/68f1962a5a429132c6b10da0f3c3a2050999251166dda2b568ce3b324f48ec1c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/68f1962a5a429132c6b10da0f3c3a2050999251166dda2b568ce3b324f48ec1c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/68f1962a5a429132c6b10da0f3c3a2050999251166dda2b568ce3b324f48ec1c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-891734",
	                "Source": "/var/lib/docker/volumes/addons-891734/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-891734",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-891734",
	                "name.minikube.sigs.k8s.io": "addons-891734",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9b39a3e4d2d8e96f0c5748a08c9fec6e673094f3d959ce53428ceee14c4e1f77",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35264"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35263"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35260"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35262"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35261"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/9b39a3e4d2d8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-891734": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "1bf82584606a",
	                        "addons-891734"
	                    ],
	                    "NetworkID": "458fc62b716778192a84de8c21d219c670d623b883cdb14f2a90271df9e58fa2",
	                    "EndpointID": "fdd34c50ab09b27a0e10c9dd6123a7b7c580a2808628d450283430067791aac8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
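When only a few fields of this dump matter, docker inspect's Go templates can select them directly; the 22/tcp port lookup below is the same template minikube itself runs later in this log:

    docker inspect -f '{{.State.Status}}' addons-891734
    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-891734
    docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-891734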
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-891734 -n addons-891734
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-891734 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-891734 logs -n 25: (1.627122871s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube               | jenkins | v1.31.2 | 06 Oct 23 02:11 UTC | 06 Oct 23 02:11 UTC |
	| delete  | -p download-only-310473                                                                     | download-only-310473   | jenkins | v1.31.2 | 06 Oct 23 02:11 UTC | 06 Oct 23 02:11 UTC |
	| delete  | -p download-only-310473                                                                     | download-only-310473   | jenkins | v1.31.2 | 06 Oct 23 02:11 UTC | 06 Oct 23 02:11 UTC |
	| start   | --download-only -p                                                                          | download-docker-058597 | jenkins | v1.31.2 | 06 Oct 23 02:11 UTC |                     |
	|         | download-docker-058597                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-058597                                                                   | download-docker-058597 | jenkins | v1.31.2 | 06 Oct 23 02:11 UTC | 06 Oct 23 02:11 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-306268   | jenkins | v1.31.2 | 06 Oct 23 02:11 UTC |                     |
	|         | binary-mirror-306268                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:43951                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-306268                                                                     | binary-mirror-306268   | jenkins | v1.31.2 | 06 Oct 23 02:11 UTC | 06 Oct 23 02:11 UTC |
	| addons  | enable dashboard -p                                                                         | addons-891734          | jenkins | v1.31.2 | 06 Oct 23 02:11 UTC |                     |
	|         | addons-891734                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-891734          | jenkins | v1.31.2 | 06 Oct 23 02:11 UTC |                     |
	|         | addons-891734                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-891734 --wait=true                                                                | addons-891734          | jenkins | v1.31.2 | 06 Oct 23 02:11 UTC | 06 Oct 23 02:14 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ip      | addons-891734 ip                                                                            | addons-891734          | jenkins | v1.31.2 | 06 Oct 23 02:14 UTC | 06 Oct 23 02:14 UTC |
	| addons  | addons-891734 addons disable                                                                | addons-891734          | jenkins | v1.31.2 | 06 Oct 23 02:14 UTC | 06 Oct 23 02:14 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-891734 addons                                                                        | addons-891734          | jenkins | v1.31.2 | 06 Oct 23 02:14 UTC | 06 Oct 23 02:14 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-891734          | jenkins | v1.31.2 | 06 Oct 23 02:15 UTC | 06 Oct 23 02:15 UTC |
	|         | addons-891734                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-891734 ssh curl -s                                                                   | addons-891734          | jenkins | v1.31.2 | 06 Oct 23 02:15 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-891734 addons                                                                        | addons-891734          | jenkins | v1.31.2 | 06 Oct 23 02:15 UTC | 06 Oct 23 02:15 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-891734 addons                                                                        | addons-891734          | jenkins | v1.31.2 | 06 Oct 23 02:15 UTC | 06 Oct 23 02:15 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-891734 ssh cat                                                                       | addons-891734          | jenkins | v1.31.2 | 06 Oct 23 02:15 UTC | 06 Oct 23 02:15 UTC |
	|         | /opt/local-path-provisioner/pvc-c94cf330-8988-48fe-a88a-dba4db821981_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-891734 addons disable                                                                | addons-891734          | jenkins | v1.31.2 | 06 Oct 23 02:15 UTC | 06 Oct 23 02:15 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-891734          | jenkins | v1.31.2 | 06 Oct 23 02:16 UTC | 06 Oct 23 02:16 UTC |
	|         | -p addons-891734                                                                            |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-891734          | jenkins | v1.31.2 | 06 Oct 23 02:16 UTC | 06 Oct 23 02:16 UTC |
	|         | addons-891734                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-891734          | jenkins | v1.31.2 | 06 Oct 23 02:16 UTC | 06 Oct 23 02:16 UTC |
	|         | -p addons-891734                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-891734 ip                                                                            | addons-891734          | jenkins | v1.31.2 | 06 Oct 23 02:17 UTC | 06 Oct 23 02:17 UTC |
	| addons  | addons-891734 addons disable                                                                | addons-891734          | jenkins | v1.31.2 | 06 Oct 23 02:17 UTC | 06 Oct 23 02:17 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-891734 addons disable                                                                | addons-891734          | jenkins | v1.31.2 | 06 Oct 23 02:17 UTC | 06 Oct 23 02:17 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/06 02:11:30
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 02:11:30.835031 2268811 out.go:296] Setting OutFile to fd 1 ...
	I1006 02:11:30.835254 2268811 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 02:11:30.835282 2268811 out.go:309] Setting ErrFile to fd 2...
	I1006 02:11:30.835305 2268811 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 02:11:30.835563 2268811 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-2262959/.minikube/bin
	I1006 02:11:30.836026 2268811 out.go:303] Setting JSON to false
	I1006 02:11:30.837041 2268811 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":42837,"bootTime":1696515454,"procs":280,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1006 02:11:30.837144 2268811 start.go:138] virtualization:  
	I1006 02:11:30.839816 2268811 out.go:177] * [addons-891734] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1006 02:11:30.842141 2268811 out.go:177]   - MINIKUBE_LOCATION=17314
	I1006 02:11:30.844363 2268811 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 02:11:30.842331 2268811 notify.go:220] Checking for updates...
	I1006 02:11:30.848766 2268811 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17314-2262959/kubeconfig
	I1006 02:11:30.850995 2268811 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-2262959/.minikube
	I1006 02:11:30.853466 2268811 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 02:11:30.855387 2268811 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 02:11:30.857704 2268811 driver.go:378] Setting default libvirt URI to qemu:///system
	I1006 02:11:30.881443 2268811 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1006 02:11:30.881550 2268811 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 02:11:30.958515 2268811 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-10-06 02:11:30.947243151 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1006 02:11:30.958636 2268811 docker.go:295] overlay module found
	I1006 02:11:30.960855 2268811 out.go:177] * Using the docker driver based on user configuration
	I1006 02:11:30.962826 2268811 start.go:298] selected driver: docker
	I1006 02:11:30.962848 2268811 start.go:902] validating driver "docker" against <nil>
	I1006 02:11:30.962861 2268811 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 02:11:30.963500 2268811 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 02:11:31.033598 2268811 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-10-06 02:11:31.024233557 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1006 02:11:31.033771 2268811 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1006 02:11:31.033998 2268811 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 02:11:31.036047 2268811 out.go:177] * Using Docker driver with root privileges
	I1006 02:11:31.038187 2268811 cni.go:84] Creating CNI manager for ""
	I1006 02:11:31.038209 2268811 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 02:11:31.038222 2268811 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 02:11:31.038244 2268811 start_flags.go:323] config:
	{Name:addons-891734 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-891734 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1006 02:11:31.040701 2268811 out.go:177] * Starting control plane node addons-891734 in cluster addons-891734
	I1006 02:11:31.042809 2268811 cache.go:122] Beginning downloading kic base image for docker with crio
	I1006 02:11:31.044942 2268811 out.go:177] * Pulling base image ...
	I1006 02:11:31.047534 2268811 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1006 02:11:31.047600 2268811 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
	I1006 02:11:31.047615 2268811 cache.go:57] Caching tarball of preloaded images
	I1006 02:11:31.047628 2268811 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1006 02:11:31.047695 2268811 preload.go:174] Found /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1006 02:11:31.047705 2268811 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1006 02:11:31.048056 2268811 profile.go:148] Saving config to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/config.json ...
	I1006 02:11:31.048086 2268811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/config.json: {Name:mke28d888ee7f45bbdfcf5a15fa1c76913db274f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:11:31.065464 2268811 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae to local cache
	I1006 02:11:31.065585 2268811 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local cache directory
	I1006 02:11:31.065604 2268811 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local cache directory, skipping pull
	I1006 02:11:31.065609 2268811 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in cache, skipping pull
	I1006 02:11:31.065617 2268811 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae as a tarball
	I1006 02:11:31.065623 2268811 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae from local cache
	I1006 02:11:46.845699 2268811 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae from cached tarball
	I1006 02:11:46.845739 2268811 cache.go:195] Successfully downloaded all kic artifacts
	I1006 02:11:46.845818 2268811 start.go:365] acquiring machines lock for addons-891734: {Name:mk7c8629d979a3a412e9f7fc701a702867a6f91d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 02:11:46.847356 2268811 start.go:369] acquired machines lock for "addons-891734" in 1.503781ms
	I1006 02:11:46.847402 2268811 start.go:93] Provisioning new machine with config: &{Name:addons-891734 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-891734 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 02:11:46.847499 2268811 start.go:125] createHost starting for "" (driver="docker")
	I1006 02:11:46.849977 2268811 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1006 02:11:46.850234 2268811 start.go:159] libmachine.API.Create for "addons-891734" (driver="docker")
	I1006 02:11:46.850271 2268811 client.go:168] LocalClient.Create starting
	I1006 02:11:46.850410 2268811 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem
	I1006 02:11:47.343613 2268811 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem
	I1006 02:11:47.644538 2268811 cli_runner.go:164] Run: docker network inspect addons-891734 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 02:11:47.663868 2268811 cli_runner.go:211] docker network inspect addons-891734 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 02:11:47.663967 2268811 network_create.go:281] running [docker network inspect addons-891734] to gather additional debugging logs...
	I1006 02:11:47.663988 2268811 cli_runner.go:164] Run: docker network inspect addons-891734
	W1006 02:11:47.680981 2268811 cli_runner.go:211] docker network inspect addons-891734 returned with exit code 1
	I1006 02:11:47.681038 2268811 network_create.go:284] error running [docker network inspect addons-891734]: docker network inspect addons-891734: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-891734 not found
	I1006 02:11:47.681074 2268811 network_create.go:286] output of [docker network inspect addons-891734]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-891734 not found
	
	** /stderr **
	I1006 02:11:47.681175 2268811 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 02:11:47.699130 2268811 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002583100}
	I1006 02:11:47.699169 2268811 network_create.go:124] attempt to create docker network addons-891734 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1006 02:11:47.699234 2268811 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-891734 addons-891734
	I1006 02:11:47.780130 2268811 network_create.go:108] docker network addons-891734 192.168.49.0/24 created
	I1006 02:11:47.780179 2268811 kic.go:118] calculated static IP "192.168.49.2" for the "addons-891734" container
	I1006 02:11:47.780255 2268811 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 02:11:47.797973 2268811 cli_runner.go:164] Run: docker volume create addons-891734 --label name.minikube.sigs.k8s.io=addons-891734 --label created_by.minikube.sigs.k8s.io=true
	I1006 02:11:47.817509 2268811 oci.go:103] Successfully created a docker volume addons-891734
	I1006 02:11:47.817607 2268811 cli_runner.go:164] Run: docker run --rm --name addons-891734-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-891734 --entrypoint /usr/bin/test -v addons-891734:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1006 02:11:49.668453 2268811 cli_runner.go:217] Completed: docker run --rm --name addons-891734-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-891734 --entrypoint /usr/bin/test -v addons-891734:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib: (1.850798132s)
	I1006 02:11:49.668484 2268811 oci.go:107] Successfully prepared a docker volume addons-891734
	I1006 02:11:49.668516 2268811 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1006 02:11:49.668534 2268811 kic.go:191] Starting extracting preloaded images to volume ...
	I1006 02:11:49.668620 2268811 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-891734:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir
	I1006 02:11:53.825685 2268811 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-891734:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir: (4.157022927s)
	I1006 02:11:53.825718 2268811 kic.go:200] duration metric: took 4.157180 seconds to extract preloaded images to volume
	W1006 02:11:53.825870 2268811 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1006 02:11:53.825992 2268811 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 02:11:53.892655 2268811 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-891734 --name addons-891734 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-891734 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-891734 --network addons-891734 --ip 192.168.49.2 --volume addons-891734:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae
	I1006 02:11:54.270458 2268811 cli_runner.go:164] Run: docker container inspect addons-891734 --format={{.State.Running}}
	I1006 02:11:54.294375 2268811 cli_runner.go:164] Run: docker container inspect addons-891734 --format={{.State.Status}}
	I1006 02:11:54.318927 2268811 cli_runner.go:164] Run: docker exec addons-891734 stat /var/lib/dpkg/alternatives/iptables
	I1006 02:11:54.413105 2268811 oci.go:144] the created container "addons-891734" has a running status.
	I1006 02:11:54.413136 2268811 kic.go:222] Creating ssh key for kic: /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/addons-891734/id_rsa...
	I1006 02:11:54.726525 2268811 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/addons-891734/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 02:11:54.749880 2268811 cli_runner.go:164] Run: docker container inspect addons-891734 --format={{.State.Status}}
	I1006 02:11:54.774636 2268811 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 02:11:54.774661 2268811 kic_runner.go:114] Args: [docker exec --privileged addons-891734 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1006 02:11:54.873498 2268811 cli_runner.go:164] Run: docker container inspect addons-891734 --format={{.State.Status}}
	I1006 02:11:54.902066 2268811 machine.go:88] provisioning docker machine ...
	I1006 02:11:54.902108 2268811 ubuntu.go:169] provisioning hostname "addons-891734"
	I1006 02:11:54.902174 2268811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-891734
	I1006 02:11:54.930351 2268811 main.go:141] libmachine: Using SSH client type: native
	I1006 02:11:54.930778 2268811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 35264 <nil> <nil>}
	I1006 02:11:54.930796 2268811 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-891734 && echo "addons-891734" | sudo tee /etc/hostname
	I1006 02:11:54.931396 2268811 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38084->127.0.0.1:35264: read: connection reset by peer
	I1006 02:11:58.079371 2268811 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-891734
	
	I1006 02:11:58.079453 2268811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-891734
	I1006 02:11:58.097830 2268811 main.go:141] libmachine: Using SSH client type: native
	I1006 02:11:58.098268 2268811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 35264 <nil> <nil>}
	I1006 02:11:58.098292 2268811 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-891734' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-891734/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-891734' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 02:11:58.228591 2268811 main.go:141] libmachine: SSH cmd err, output: <nil>: 
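	The shell block above is idempotent: it only touches /etc/hosts when no existing line ends in the hostname, and it prefers rewriting an existing 127.0.1.1 entry over appending a new one. After it runs, the node's /etc/hosts should contain a line like:
	  127.0.1.1 addons-891734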
	I1006 02:11:58.228631 2268811 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17314-2262959/.minikube CaCertPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17314-2262959/.minikube}
	I1006 02:11:58.228660 2268811 ubuntu.go:177] setting up certificates
	I1006 02:11:58.228669 2268811 provision.go:83] configureAuth start
	I1006 02:11:58.228734 2268811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-891734
	I1006 02:11:58.250423 2268811 provision.go:138] copyHostCerts
	I1006 02:11:58.250504 2268811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17314-2262959/.minikube/key.pem (1675 bytes)
	I1006 02:11:58.250632 2268811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.pem (1082 bytes)
	I1006 02:11:58.250740 2268811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17314-2262959/.minikube/cert.pem (1123 bytes)
	I1006 02:11:58.250791 2268811 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca-key.pem org=jenkins.addons-891734 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-891734]
	I1006 02:11:58.550986 2268811 provision.go:172] copyRemoteCerts
	I1006 02:11:58.551087 2268811 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 02:11:58.551129 2268811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-891734
	I1006 02:11:58.570204 2268811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35264 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/addons-891734/id_rsa Username:docker}
	I1006 02:11:58.665874 2268811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1006 02:11:58.695385 2268811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 02:11:58.726064 2268811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 02:11:58.756041 2268811 provision.go:86] duration metric: configureAuth took 527.354642ms
	I1006 02:11:58.756066 2268811 ubuntu.go:193] setting minikube options for container-runtime
	I1006 02:11:58.756268 2268811 config.go:182] Loaded profile config "addons-891734": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1006 02:11:58.756370 2268811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-891734
	I1006 02:11:58.776855 2268811 main.go:141] libmachine: Using SSH client type: native
	I1006 02:11:58.777281 2268811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 35264 <nil> <nil>}
	I1006 02:11:58.777303 2268811 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 02:11:59.031101 2268811 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 02:11:59.031123 2268811 machine.go:91] provisioned docker machine in 4.129034863s
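	A quick way to confirm the provisioned runtime flags took effect (a sketch, not part of the run) is to read the file back and check that cri-o restarted cleanly:
	  cat /etc/sysconfig/crio.minikube
	  systemctl is-active crio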
	I1006 02:11:59.031132 2268811 client.go:171] LocalClient.Create took 12.180834904s
	I1006 02:11:59.031145 2268811 start.go:167] duration metric: libmachine.API.Create for "addons-891734" took 12.18091213s
	I1006 02:11:59.031153 2268811 start.go:300] post-start starting for "addons-891734" (driver="docker")
	I1006 02:11:59.031163 2268811 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 02:11:59.031224 2268811 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 02:11:59.031261 2268811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-891734
	I1006 02:11:59.049129 2268811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35264 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/addons-891734/id_rsa Username:docker}
	I1006 02:11:59.150165 2268811 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 02:11:59.154393 2268811 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 02:11:59.154432 2268811 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1006 02:11:59.154461 2268811 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1006 02:11:59.154477 2268811 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1006 02:11:59.154488 2268811 filesync.go:126] Scanning /home/jenkins/minikube-integration/17314-2262959/.minikube/addons for local assets ...
	I1006 02:11:59.154565 2268811 filesync.go:126] Scanning /home/jenkins/minikube-integration/17314-2262959/.minikube/files for local assets ...
	I1006 02:11:59.154591 2268811 start.go:303] post-start completed in 123.432778ms
	I1006 02:11:59.154912 2268811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-891734
	I1006 02:11:59.174304 2268811 profile.go:148] Saving config to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/config.json ...
	I1006 02:11:59.174583 2268811 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 02:11:59.174642 2268811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-891734
	I1006 02:11:59.192817 2268811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35264 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/addons-891734/id_rsa Username:docker}
	I1006 02:11:59.285334 2268811 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 02:11:59.291271 2268811 start.go:128] duration metric: createHost completed in 12.443755753s
	I1006 02:11:59.291291 2268811 start.go:83] releasing machines lock for "addons-891734", held for 12.443913157s
	I1006 02:11:59.291361 2268811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-891734
	I1006 02:11:59.309300 2268811 ssh_runner.go:195] Run: cat /version.json
	I1006 02:11:59.309339 2268811 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 02:11:59.309351 2268811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-891734
	I1006 02:11:59.309423 2268811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-891734
	I1006 02:11:59.331558 2268811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35264 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/addons-891734/id_rsa Username:docker}
	I1006 02:11:59.338357 2268811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35264 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/addons-891734/id_rsa Username:docker}
	I1006 02:11:59.423674 2268811 ssh_runner.go:195] Run: systemctl --version
	I1006 02:11:59.561452 2268811 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 02:11:59.709891 2268811 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1006 02:11:59.715563 2268811 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 02:11:59.741409 2268811 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1006 02:11:59.741510 2268811 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 02:11:59.775436 2268811 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1006 02:11:59.775461 2268811 start.go:472] detecting cgroup driver to use...
	I1006 02:11:59.775510 2268811 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1006 02:11:59.775594 2268811 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 02:11:59.793816 2268811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 02:11:59.807950 2268811 docker.go:198] disabling cri-docker service (if available) ...
	I1006 02:11:59.808039 2268811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 02:11:59.824471 2268811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 02:11:59.841885 2268811 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 02:11:59.939124 2268811 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 02:12:00.098294 2268811 docker.go:214] disabling docker service ...
	I1006 02:12:00.098417 2268811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 02:12:00.129718 2268811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 02:12:00.148458 2268811 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 02:12:00.260869 2268811 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 02:12:00.368935 2268811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 02:12:00.384384 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 02:12:00.408256 2268811 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1006 02:12:00.408326 2268811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 02:12:00.422512 2268811 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 02:12:00.422680 2268811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 02:12:00.436975 2268811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 02:12:00.450275 2268811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
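	The three sed edits above leave the drop-in with the pause image, cgroup manager and conmon cgroup pinned. Assuming the stock section layout of /etc/crio/crio.conf.d/02-crio.conf, the relevant lines end up roughly as:
	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.9"
	  [crio.runtime]
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"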
	I1006 02:12:00.463642 2268811 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 02:12:00.475982 2268811 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 02:12:00.486717 2268811 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 02:12:00.497446 2268811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 02:12:00.596066 2268811 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1006 02:12:00.721309 2268811 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 02:12:00.721444 2268811 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 02:12:00.726792 2268811 start.go:540] Will wait 60s for crictl version
	I1006 02:12:00.726855 2268811 ssh_runner.go:195] Run: which crictl
	I1006 02:12:00.731342 2268811 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1006 02:12:00.773859 2268811 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1006 02:12:00.773959 2268811 ssh_runner.go:195] Run: crio --version
	I1006 02:12:00.823152 2268811 ssh_runner.go:195] Run: crio --version
	I1006 02:12:00.869898 2268811 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	I1006 02:12:00.871778 2268811 cli_runner.go:164] Run: docker network inspect addons-891734 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 02:12:00.892495 2268811 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 02:12:00.897356 2268811 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 02:12:00.911134 2268811 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1006 02:12:00.911208 2268811 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 02:12:00.974959 2268811 crio.go:496] all images are preloaded for cri-o runtime.
	I1006 02:12:00.974984 2268811 crio.go:415] Images already preloaded, skipping extraction
	I1006 02:12:00.975067 2268811 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 02:12:01.020816 2268811 crio.go:496] all images are preloaded for cri-o runtime.
	I1006 02:12:01.020841 2268811 cache_images.go:84] Images are preloaded, skipping loading
	I1006 02:12:01.020920 2268811 ssh_runner.go:195] Run: crio config
	I1006 02:12:01.097314 2268811 cni.go:84] Creating CNI manager for ""
	I1006 02:12:01.097347 2268811 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 02:12:01.097379 2268811 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1006 02:12:01.097404 2268811 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-891734 NodeName:addons-891734 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 02:12:01.097547 2268811 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-891734"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
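	The rendered config above is what later gets copied to /var/tmp/minikube/kubeadm.yaml. On kubeadm releases that ship the validate subcommand (v1.26+), it can be checked offline before init, e.g.:
	  sudo /var/lib/minikube/binaries/v1.28.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml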
	
	I1006 02:12:01.097649 2268811 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-891734 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:addons-891734 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
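	The drop-in above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below; once systemd has been reloaded, the merged unit can be inspected with (sketch):
	  sudo systemctl daemon-reload
	  systemctl cat kubelet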
	I1006 02:12:01.097727 2268811 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1006 02:12:01.109334 2268811 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 02:12:01.109437 2268811 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 02:12:01.120880 2268811 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1006 02:12:01.143585 2268811 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 02:12:01.165793 2268811 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I1006 02:12:01.187816 2268811 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1006 02:12:01.192988 2268811 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 02:12:01.207009 2268811 certs.go:56] Setting up /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734 for IP: 192.168.49.2
	I1006 02:12:01.207041 2268811 certs.go:190] acquiring lock for shared ca certs: {Name:mk4532656c287f04abff160cc5263fddcb69ac4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:12:01.207202 2268811 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.key
	I1006 02:12:01.551237 2268811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.crt ...
	I1006 02:12:01.551273 2268811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.crt: {Name:mk27eb232d380889b0983d8f9a30ef73671df562 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:12:01.551511 2268811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.key ...
	I1006 02:12:01.551530 2268811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.key: {Name:mk58202ba50f11d0f10070a041974158dcb01839 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:12:01.552346 2268811 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17314-2262959/.minikube/proxy-client-ca.key
	I1006 02:12:01.659007 2268811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17314-2262959/.minikube/proxy-client-ca.crt ...
	I1006 02:12:01.659065 2268811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/proxy-client-ca.crt: {Name:mkd426817e44d12245ed38dc4032297e8ce2b595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:12:01.659678 2268811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17314-2262959/.minikube/proxy-client-ca.key ...
	I1006 02:12:01.659699 2268811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/proxy-client-ca.key: {Name:mk01683eca45f2e0bbcb37035204a8983a679982 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:12:01.660292 2268811 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/client.key
	I1006 02:12:01.660310 2268811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/client.crt with IP's: []
	I1006 02:12:02.177565 2268811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/client.crt ...
	I1006 02:12:02.177597 2268811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/client.crt: {Name:mkc6f306c2235a988fe9ead7a9f15ba3502861af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:12:02.177793 2268811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/client.key ...
	I1006 02:12:02.177806 2268811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/client.key: {Name:mkd3a20ae0230903ea9590b824bc14f50eb78313 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:12:02.178395 2268811 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/apiserver.key.dd3b5fb2
	I1006 02:12:02.178419 2268811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1006 02:12:02.494466 2268811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/apiserver.crt.dd3b5fb2 ...
	I1006 02:12:02.494497 2268811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/apiserver.crt.dd3b5fb2: {Name:mk5812e503f4f9c86353bd18b0f00d8d56261eb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:12:02.494690 2268811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/apiserver.key.dd3b5fb2 ...
	I1006 02:12:02.494702 2268811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/apiserver.key.dd3b5fb2: {Name:mk54d02feb9bade69cb43b273dc6175ad07a3ed4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:12:02.495423 2268811 certs.go:337] copying /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/apiserver.crt
	I1006 02:12:02.495507 2268811 certs.go:341] copying /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/apiserver.key
	I1006 02:12:02.495560 2268811 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/proxy-client.key
	I1006 02:12:02.495581 2268811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/proxy-client.crt with IP's: []
	I1006 02:12:03.320650 2268811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/proxy-client.crt ...
	I1006 02:12:03.320685 2268811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/proxy-client.crt: {Name:mk499dcaae145ec47c0d3cfb5b4b0aea18d1d1f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:12:03.320883 2268811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/proxy-client.key ...
	I1006 02:12:03.320896 2268811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/proxy-client.key: {Name:mkb5eb7c0ad6676ff071a13171ea0b435fc9da1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:12:03.321811 2268811 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 02:12:03.321860 2268811 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem (1082 bytes)
	I1006 02:12:03.321887 2268811 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem (1123 bytes)
	I1006 02:12:03.321916 2268811 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/key.pem (1675 bytes)
	I1006 02:12:03.322543 2268811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1006 02:12:03.352037 2268811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1006 02:12:03.380457 2268811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 02:12:03.409037 2268811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 02:12:03.440446 2268811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 02:12:03.469226 2268811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 02:12:03.497868 2268811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 02:12:03.526059 2268811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1006 02:12:03.554500 2268811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 02:12:03.586489 2268811 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 02:12:03.608194 2268811 ssh_runner.go:195] Run: openssl version
	I1006 02:12:03.615469 2268811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 02:12:03.627564 2268811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 02:12:03.632409 2268811 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  6 02:12 /usr/share/ca-certificates/minikubeCA.pem
	I1006 02:12:03.632509 2268811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 02:12:03.640942 2268811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
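	The b5213941.0 name follows OpenSSL's subject-hash convention: CAs are looked up by the hash of their subject, which is exactly what the preceding openssl x509 -hash call computed. The same link could be rebuilt by hand with:
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem \
	    /etc/ssl/certs/$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem).0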
	I1006 02:12:03.652508 2268811 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1006 02:12:03.657008 2268811 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1006 02:12:03.657084 2268811 kubeadm.go:404] StartCluster: {Name:addons-891734 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-891734 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1006 02:12:03.657170 2268811 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 02:12:03.657230 2268811 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 02:12:03.698911 2268811 cri.go:89] found id: ""
	I1006 02:12:03.699028 2268811 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 02:12:03.709880 2268811 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 02:12:03.721478 2268811 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1006 02:12:03.721558 2268811 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 02:12:03.732442 2268811 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 02:12:03.732523 2268811 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 02:12:03.786804 2268811 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1006 02:12:03.787029 2268811 kubeadm.go:322] [preflight] Running pre-flight checks
	I1006 02:12:03.832354 2268811 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1006 02:12:03.832490 2268811 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-aws
	I1006 02:12:03.832578 2268811 kubeadm.go:322] OS: Linux
	I1006 02:12:03.832671 2268811 kubeadm.go:322] CGROUPS_CPU: enabled
	I1006 02:12:03.832769 2268811 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1006 02:12:03.832844 2268811 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1006 02:12:03.832926 2268811 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1006 02:12:03.833009 2268811 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1006 02:12:03.833091 2268811 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1006 02:12:03.833171 2268811 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1006 02:12:03.833251 2268811 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1006 02:12:03.833333 2268811 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1006 02:12:03.914744 2268811 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 02:12:03.914913 2268811 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 02:12:03.915040 2268811 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 02:12:04.196234 2268811 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 02:12:04.200524 2268811 out.go:204]   - Generating certificates and keys ...
	I1006 02:12:04.200797 2268811 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1006 02:12:04.200898 2268811 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1006 02:12:05.168226 2268811 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1006 02:12:05.552576 2268811 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1006 02:12:06.176769 2268811 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1006 02:12:06.875869 2268811 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1006 02:12:07.031864 2268811 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1006 02:12:07.032238 2268811 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-891734 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 02:12:07.310338 2268811 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1006 02:12:07.310630 2268811 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-891734 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 02:12:07.586331 2268811 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1006 02:12:07.997532 2268811 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1006 02:12:09.066432 2268811 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1006 02:12:09.066792 2268811 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 02:12:09.987863 2268811 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 02:12:11.391860 2268811 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 02:12:11.657087 2268811 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 02:12:12.129541 2268811 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 02:12:12.130252 2268811 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 02:12:12.133093 2268811 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 02:12:12.137732 2268811 out.go:204]   - Booting up control plane ...
	I1006 02:12:12.137911 2268811 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 02:12:12.137988 2268811 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 02:12:12.138403 2268811 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 02:12:12.149408 2268811 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 02:12:12.150292 2268811 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 02:12:12.150569 2268811 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1006 02:12:12.247187 2268811 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1006 02:12:19.249529 2268811 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.002379 seconds
	I1006 02:12:19.249661 2268811 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1006 02:12:19.266614 2268811 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1006 02:12:19.790269 2268811 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1006 02:12:19.790458 2268811 kubeadm.go:322] [mark-control-plane] Marking the node addons-891734 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1006 02:12:20.301690 2268811 kubeadm.go:322] [bootstrap-token] Using token: r0hmso.shy8bkgrdndkqll2
	I1006 02:12:20.303701 2268811 out.go:204]   - Configuring RBAC rules ...
	I1006 02:12:20.303836 2268811 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1006 02:12:20.309200 2268811 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1006 02:12:20.317528 2268811 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1006 02:12:20.323371 2268811 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1006 02:12:20.329586 2268811 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1006 02:12:20.333658 2268811 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1006 02:12:20.350215 2268811 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1006 02:12:20.598474 2268811 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1006 02:12:20.738801 2268811 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1006 02:12:20.739996 2268811 kubeadm.go:322] 
	I1006 02:12:20.740063 2268811 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1006 02:12:20.740069 2268811 kubeadm.go:322] 
	I1006 02:12:20.740141 2268811 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1006 02:12:20.740146 2268811 kubeadm.go:322] 
	I1006 02:12:20.740170 2268811 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1006 02:12:20.740225 2268811 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1006 02:12:20.740272 2268811 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1006 02:12:20.740278 2268811 kubeadm.go:322] 
	I1006 02:12:20.740328 2268811 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1006 02:12:20.740333 2268811 kubeadm.go:322] 
	I1006 02:12:20.740378 2268811 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1006 02:12:20.740382 2268811 kubeadm.go:322] 
	I1006 02:12:20.740432 2268811 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1006 02:12:20.740502 2268811 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1006 02:12:20.740566 2268811 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1006 02:12:20.740572 2268811 kubeadm.go:322] 
	I1006 02:12:20.740651 2268811 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1006 02:12:20.740722 2268811 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1006 02:12:20.740727 2268811 kubeadm.go:322] 
	I1006 02:12:20.740806 2268811 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token r0hmso.shy8bkgrdndkqll2 \
	I1006 02:12:20.740902 2268811 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:80472cab419bab12b6f684b1710332d34cec39ebaa3946bc81d26724ddc88d38 \
	I1006 02:12:20.740922 2268811 kubeadm.go:322] 	--control-plane 
	I1006 02:12:20.740926 2268811 kubeadm.go:322] 
	I1006 02:12:20.741005 2268811 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1006 02:12:20.741010 2268811 kubeadm.go:322] 
	I1006 02:12:20.743156 2268811 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token r0hmso.shy8bkgrdndkqll2 \
	I1006 02:12:20.743261 2268811 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:80472cab419bab12b6f684b1710332d34cec39ebaa3946bc81d26724ddc88d38 
	I1006 02:12:20.744340 2268811 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-aws\n", err: exit status 1
	I1006 02:12:20.744446 2268811 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
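	The join commands printed above embed a 24h bootstrap token; if it expires before a node joins, an equivalent command can be regenerated on the control plane (sketch):
	  sudo /var/lib/minikube/binaries/v1.28.2/kubeadm token create --print-join-command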
	I1006 02:12:20.744459 2268811 cni.go:84] Creating CNI manager for ""
	I1006 02:12:20.744468 2268811 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 02:12:20.747072 2268811 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1006 02:12:20.749345 2268811 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1006 02:12:20.762138 2268811 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1006 02:12:20.762161 2268811 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1006 02:12:20.802951 2268811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
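	With the CNI manifest applied, kindnet rolls out as a DaemonSet in kube-system; assuming it keeps its usual kindnet name, readiness can be watched with (sketch):
	  sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system rollout status daemonset kindnet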
	I1006 02:12:21.686370 2268811 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1006 02:12:21.686429 2268811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:12:21.686508 2268811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=84890cb24d0240d9d992d7c7712ee519ceed4154 minikube.k8s.io/name=addons-891734 minikube.k8s.io/updated_at=2023_10_06T02_12_21_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
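	The two kubectl calls above grant cluster-admin to kube-system's default ServiceAccount and stamp the node with minikube's version/commit labels; both are easy to verify after the fact (sketch):
	  kubectl get clusterrolebinding minikube-rbac
	  kubectl get node addons-891734 --show-labels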
	I1006 02:12:21.846033 2268811 ops.go:34] apiserver oom_adj: -16
	I1006 02:12:21.846125 2268811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:12:21.964522 2268811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:12:22.567887 2268811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:12:23.067296 2268811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:12:23.568362 2268811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:12:24.067864 2268811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:12:24.567301 2268811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:12:25.067827 2268811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:12:25.567462 2268811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:12:26.067865 2268811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:12:26.567366 2268811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:12:27.067891 2268811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:12:27.567674 2268811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:12:28.067822 2268811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:12:28.568264 2268811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:12:29.067911 2268811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:12:29.567882 2268811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:12:30.068361 2268811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:12:30.567977 2268811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:12:31.068134 2268811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:12:31.567333 2268811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:12:32.068259 2268811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:12:32.567928 2268811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:12:32.662447 2268811 kubeadm.go:1081] duration metric: took 10.976074637s to wait for elevateKubeSystemPrivileges.
	I1006 02:12:32.662480 2268811 kubeadm.go:406] StartCluster complete in 29.005426936s
	I1006 02:12:32.662498 2268811 settings.go:142] acquiring lock: {Name:mkbf8759b61c125112e0d07f4c53bb4e84a6de4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:12:32.663164 2268811 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17314-2262959/kubeconfig
	I1006 02:12:32.663580 2268811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/kubeconfig: {Name:mkf2ae4867a3638ac11dac5beae6919a4f83b43a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:12:32.664471 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1006 02:12:32.664757 2268811 config.go:182] Loaded profile config "addons-891734": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1006 02:12:32.664872 2268811 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1006 02:12:32.664955 2268811 addons.go:69] Setting volumesnapshots=true in profile "addons-891734"
	I1006 02:12:32.664969 2268811 addons.go:231] Setting addon volumesnapshots=true in "addons-891734"
	I1006 02:12:32.665059 2268811 host.go:66] Checking if "addons-891734" exists ...
	I1006 02:12:32.665540 2268811 cli_runner.go:164] Run: docker container inspect addons-891734 --format={{.State.Status}}
	I1006 02:12:32.667007 2268811 addons.go:69] Setting ingress-dns=true in profile "addons-891734"
	I1006 02:12:32.667033 2268811 addons.go:231] Setting addon ingress-dns=true in "addons-891734"
	I1006 02:12:32.667103 2268811 host.go:66] Checking if "addons-891734" exists ...
	I1006 02:12:32.667564 2268811 cli_runner.go:164] Run: docker container inspect addons-891734 --format={{.State.Status}}
	I1006 02:12:32.668023 2268811 addons.go:69] Setting cloud-spanner=true in profile "addons-891734"
	I1006 02:12:32.668050 2268811 addons.go:231] Setting addon cloud-spanner=true in "addons-891734"
	I1006 02:12:32.668084 2268811 host.go:66] Checking if "addons-891734" exists ...
	I1006 02:12:32.668482 2268811 cli_runner.go:164] Run: docker container inspect addons-891734 --format={{.State.Status}}
	I1006 02:12:32.668589 2268811 addons.go:69] Setting inspektor-gadget=true in profile "addons-891734"
	I1006 02:12:32.668606 2268811 addons.go:231] Setting addon inspektor-gadget=true in "addons-891734"
	I1006 02:12:32.668636 2268811 host.go:66] Checking if "addons-891734" exists ...
	I1006 02:12:32.669014 2268811 cli_runner.go:164] Run: docker container inspect addons-891734 --format={{.State.Status}}
	I1006 02:12:32.678273 2268811 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-891734"
	I1006 02:12:32.678396 2268811 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-891734"
	I1006 02:12:32.678479 2268811 host.go:66] Checking if "addons-891734" exists ...
	I1006 02:12:32.679201 2268811 cli_runner.go:164] Run: docker container inspect addons-891734 --format={{.State.Status}}
	I1006 02:12:32.681926 2268811 addons.go:69] Setting metrics-server=true in profile "addons-891734"
	I1006 02:12:32.686091 2268811 addons.go:231] Setting addon metrics-server=true in "addons-891734"
	I1006 02:12:32.686188 2268811 host.go:66] Checking if "addons-891734" exists ...
	I1006 02:12:32.682085 2268811 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-891734"
	I1006 02:12:32.700689 2268811 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-891734"
	I1006 02:12:32.701210 2268811 host.go:66] Checking if "addons-891734" exists ...
	I1006 02:12:32.682094 2268811 addons.go:69] Setting registry=true in profile "addons-891734"
	I1006 02:12:32.682099 2268811 addons.go:69] Setting storage-provisioner=true in profile "addons-891734"
	I1006 02:12:32.682103 2268811 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-891734"
	I1006 02:12:32.685893 2268811 addons.go:69] Setting default-storageclass=true in profile "addons-891734"
	I1006 02:12:32.685905 2268811 addons.go:69] Setting gcp-auth=true in profile "addons-891734"
	I1006 02:12:32.685910 2268811 addons.go:69] Setting ingress=true in profile "addons-891734"
	I1006 02:12:32.701446 2268811 cli_runner.go:164] Run: docker container inspect addons-891734 --format={{.State.Status}}
	I1006 02:12:32.712088 2268811 cli_runner.go:164] Run: docker container inspect addons-891734 --format={{.State.Status}}
	I1006 02:12:32.719710 2268811 addons.go:231] Setting addon registry=true in "addons-891734"
	I1006 02:12:32.719783 2268811 host.go:66] Checking if "addons-891734" exists ...
	I1006 02:12:32.720230 2268811 cli_runner.go:164] Run: docker container inspect addons-891734 --format={{.State.Status}}
	I1006 02:12:32.738700 2268811 mustload.go:65] Loading cluster: addons-891734
	I1006 02:12:32.739171 2268811 config.go:182] Loaded profile config "addons-891734": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1006 02:12:32.742021 2268811 cli_runner.go:164] Run: docker container inspect addons-891734 --format={{.State.Status}}
	I1006 02:12:32.742372 2268811 addons.go:231] Setting addon ingress=true in "addons-891734"
	I1006 02:12:32.787852 2268811 host.go:66] Checking if "addons-891734" exists ...
	I1006 02:12:32.788346 2268811 cli_runner.go:164] Run: docker container inspect addons-891734 --format={{.State.Status}}
	I1006 02:12:32.819001 2268811 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1006 02:12:32.813150 2268811 addons.go:231] Setting addon storage-provisioner=true in "addons-891734"
	I1006 02:12:32.813170 2268811 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-891734"
	I1006 02:12:32.813179 2268811 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-891734"
	I1006 02:12:32.826297 2268811 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1006 02:12:32.829353 2268811 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1006 02:12:32.829428 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1006 02:12:32.829530 2268811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-891734
	I1006 02:12:32.833823 2268811 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.21.0
	I1006 02:12:32.836066 2268811 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1006 02:12:32.836089 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1006 02:12:32.836157 2268811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-891734
	I1006 02:12:32.845064 2268811 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1006 02:12:32.845133 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1006 02:12:32.845246 2268811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-891734
	I1006 02:12:32.869719 2268811 host.go:66] Checking if "addons-891734" exists ...
	I1006 02:12:32.874748 2268811 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1006 02:12:32.879216 2268811 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1006 02:12:32.874843 2268811 cli_runner.go:164] Run: docker container inspect addons-891734 --format={{.State.Status}}
	I1006 02:12:32.874735 2268811 cli_runner.go:164] Run: docker container inspect addons-891734 --format={{.State.Status}}
	I1006 02:12:32.875106 2268811 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-891734" context rescaled to 1 replicas
	I1006 02:12:32.875120 2268811 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.10
	I1006 02:12:32.875427 2268811 cli_runner.go:164] Run: docker container inspect addons-891734 --format={{.State.Status}}
	I1006 02:12:32.886738 2268811 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1006 02:12:32.899183 2268811 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1006 02:12:32.899214 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1006 02:12:32.899282 2268811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-891734
	I1006 02:12:32.887573 2268811 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 02:12:32.902386 2268811 out.go:177] * Verifying Kubernetes components...
	I1006 02:12:32.906333 2268811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 02:12:32.908956 2268811 out.go:177]   - Using image docker.io/registry:2.8.1
	I1006 02:12:32.903283 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1006 02:12:32.914432 2268811 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1006 02:12:32.914453 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1006 02:12:32.914518 2268811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-891734
	I1006 02:12:32.912657 2268811 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1006 02:12:32.950465 2268811 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.1
	I1006 02:12:32.954655 2268811 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1006 02:12:32.954676 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1006 02:12:32.954744 2268811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-891734
	I1006 02:12:32.960562 2268811 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1006 02:12:32.951613 2268811 node_ready.go:35] waiting up to 6m0s for node "addons-891734" to be "Ready" ...
	I1006 02:12:32.950474 2268811 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1006 02:12:32.967933 2268811 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1006 02:12:32.968010 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1006 02:12:32.968111 2268811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-891734
	I1006 02:12:33.007132 2268811 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1006 02:12:33.009078 2268811 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 02:12:33.014253 2268811 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 02:12:33.014344 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 02:12:33.014450 2268811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-891734
	I1006 02:12:33.032347 2268811 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1006 02:12:33.038006 2268811 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1006 02:12:33.043683 2268811 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1006 02:12:33.047166 2268811 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1006 02:12:33.047234 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1006 02:12:33.047336 2268811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-891734
	I1006 02:12:33.037532 2268811 host.go:66] Checking if "addons-891734" exists ...
	I1006 02:12:33.070269 2268811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35264 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/addons-891734/id_rsa Username:docker}
	I1006 02:12:33.081172 2268811 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.1
	I1006 02:12:33.084225 2268811 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1006 02:12:33.086442 2268811 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1006 02:12:33.088911 2268811 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1006 02:12:33.088930 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I1006 02:12:33.095225 2268811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-891734
	I1006 02:12:33.166811 2268811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35264 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/addons-891734/id_rsa Username:docker}
	I1006 02:12:33.180434 2268811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35264 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/addons-891734/id_rsa Username:docker}
	I1006 02:12:33.187768 2268811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35264 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/addons-891734/id_rsa Username:docker}
	I1006 02:12:33.215700 2268811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35264 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/addons-891734/id_rsa Username:docker}
	I1006 02:12:33.220594 2268811 addons.go:231] Setting addon default-storageclass=true in "addons-891734"
	I1006 02:12:33.220631 2268811 host.go:66] Checking if "addons-891734" exists ...
	I1006 02:12:33.221071 2268811 cli_runner.go:164] Run: docker container inspect addons-891734 --format={{.State.Status}}
	I1006 02:12:33.221996 2268811 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-891734"
	I1006 02:12:33.222042 2268811 host.go:66] Checking if "addons-891734" exists ...
	I1006 02:12:33.222492 2268811 cli_runner.go:164] Run: docker container inspect addons-891734 --format={{.State.Status}}
	I1006 02:12:33.262134 2268811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35264 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/addons-891734/id_rsa Username:docker}
	I1006 02:12:33.262953 2268811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35264 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/addons-891734/id_rsa Username:docker}
	I1006 02:12:33.282394 2268811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35264 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/addons-891734/id_rsa Username:docker}
	I1006 02:12:33.303265 2268811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35264 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/addons-891734/id_rsa Username:docker}
	I1006 02:12:33.366682 2268811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35264 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/addons-891734/id_rsa Username:docker}
	I1006 02:12:33.405563 2268811 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1006 02:12:33.407505 2268811 out.go:177]   - Using image docker.io/busybox:stable
	I1006 02:12:33.410000 2268811 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1006 02:12:33.410018 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1006 02:12:33.410082 2268811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-891734
	I1006 02:12:33.407411 2268811 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 02:12:33.410283 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 02:12:33.410319 2268811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-891734
	I1006 02:12:33.483505 2268811 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1006 02:12:33.483540 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1006 02:12:33.506724 2268811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35264 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/addons-891734/id_rsa Username:docker}
	I1006 02:12:33.512919 2268811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35264 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/addons-891734/id_rsa Username:docker}
	I1006 02:12:33.566826 2268811 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1006 02:12:33.566852 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1006 02:12:33.572804 2268811 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1006 02:12:33.572829 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1006 02:12:33.640749 2268811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1006 02:12:33.643833 2268811 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1006 02:12:33.643853 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1006 02:12:33.689059 2268811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1006 02:12:33.697852 2268811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1006 02:12:33.700184 2268811 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1006 02:12:33.700249 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1006 02:12:33.700408 2268811 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1006 02:12:33.700438 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1006 02:12:33.731638 2268811 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1006 02:12:33.731708 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1006 02:12:33.802340 2268811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1006 02:12:33.802717 2268811 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1006 02:12:33.802762 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1006 02:12:33.805405 2268811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 02:12:33.840335 2268811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1006 02:12:33.841683 2268811 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1006 02:12:33.841730 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1006 02:12:33.888013 2268811 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1006 02:12:33.888066 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1006 02:12:33.924672 2268811 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1006 02:12:33.924741 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1006 02:12:34.006739 2268811 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1006 02:12:34.006817 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1006 02:12:34.024220 2268811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1006 02:12:34.043002 2268811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1006 02:12:34.056996 2268811 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1006 02:12:34.057018 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1006 02:12:34.112770 2268811 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1006 02:12:34.112798 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1006 02:12:34.129312 2268811 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1006 02:12:34.129377 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1006 02:12:34.208671 2268811 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1006 02:12:34.208741 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1006 02:12:34.219489 2268811 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1006 02:12:34.219559 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1006 02:12:34.275381 2268811 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1006 02:12:34.275451 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1006 02:12:34.342505 2268811 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1006 02:12:34.342574 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1006 02:12:34.404322 2268811 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1006 02:12:34.404381 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1006 02:12:34.407889 2268811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1006 02:12:34.504348 2268811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1006 02:12:34.545757 2268811 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1006 02:12:34.545826 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1006 02:12:34.557045 2268811 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1006 02:12:34.557143 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1006 02:12:34.669408 2268811 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1006 02:12:34.669478 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1006 02:12:34.669841 2268811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1006 02:12:34.750323 2268811 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1006 02:12:34.750393 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1006 02:12:34.818550 2268811 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1006 02:12:34.818622 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1006 02:12:34.929641 2268811 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1006 02:12:34.929712 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1006 02:12:34.989681 2268811 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1006 02:12:34.989762 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1006 02:12:35.113919 2268811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1006 02:12:35.349235 2268811 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.436555437s)
	I1006 02:12:35.349338 2268811 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
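For reference, the long sed pipeline completed above edits the CoreDNS Corefile in place; the fragment it injects (reconstructed from the sed expressions themselves, so surrounding directives and indentation are approximate) looks like:

	        log
	        errors
	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf

The hosts plugin answers host.minikube.internal from the static entry and falls through to the forward plugin for every other name.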
	I1006 02:12:35.565199 2268811 node_ready.go:58] node "addons-891734" has status "Ready":"False"
	I1006 02:12:37.782194 2268811 node_ready.go:58] node "addons-891734" has status "Ready":"False"
	I1006 02:12:38.102848 2268811 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.462043328s)
	I1006 02:12:38.103026 2268811 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.413870474s)
	I1006 02:12:38.103118 2268811 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.405192173s)
	I1006 02:12:38.103198 2268811 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.30078758s)
	I1006 02:12:38.103262 2268811 addons.go:467] Verifying addon registry=true in "addons-891734"
	I1006 02:12:38.105979 2268811 out.go:177] * Verifying registry addon...
	I1006 02:12:38.109682 2268811 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1006 02:12:38.167533 2268811 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1006 02:12:38.167611 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:38.197690 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:38.400321 2268811 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.594847355s)
	I1006 02:12:38.729512 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:39.124220 2268811 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.283813529s)
	I1006 02:12:39.124392 2268811 addons.go:467] Verifying addon ingress=true in "addons-891734"
	I1006 02:12:39.124486 2268811 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.08138862s)
	I1006 02:12:39.126665 2268811 out.go:177] * Verifying ingress addon...
	I1006 02:12:39.124679 2268811 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.716715391s)
	I1006 02:12:39.124734 2268811 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.620305471s)
	I1006 02:12:39.124814 2268811 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.454931125s)
	I1006 02:12:39.124332 2268811 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.100041083s)
	I1006 02:12:39.129909 2268811 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1006 02:12:39.130062 2268811 addons.go:467] Verifying addon metrics-server=true in "addons-891734"
	W1006 02:12:39.130098 2268811 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1006 02:12:39.130114 2268811 retry.go:31] will retry after 340.613103ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
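The failure is the usual CRD ordering race: this batch creates the snapshot.storage.k8s.io CRDs and a VolumeSnapshotClass in the same kubectl apply, and the new kinds are not yet established when the custom resource is validated, hence "no matches for kind". The addon manager's remedy, visible below, is a timed retry with kubectl apply --force. An equivalent manual sequence, shown only as a sketch (not what minikube runs), would wait for the CRD to be established before applying the class:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for condition=established --timeout=60s \
	    crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml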
	I1006 02:12:39.146944 2268811 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1006 02:12:39.147003 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1006 02:12:39.168049 2268811 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
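This warning is an optimistic-concurrency conflict rather than a broken addon: the default-storageclass and storage-provisioner-rancher enablers update StorageClass objects at nearly the same time, so the losing write carries a stale resourceVersion and the API server rejects it until it is retried against the latest object. The standard manual equivalent of what the callback was attempting is the is-default-class annotation patch from the Kubernetes documentation (a sketch, assuming the local-path class named in the message):

	kubectl patch storageclass local-path -p \
	    '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'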
	I1006 02:12:39.170039 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:39.202005 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:39.425539 2268811 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.311522405s)
	I1006 02:12:39.425619 2268811 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-891734"
	I1006 02:12:39.428063 2268811 out.go:177] * Verifying csi-hostpath-driver addon...
	I1006 02:12:39.430902 2268811 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1006 02:12:39.436877 2268811 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1006 02:12:39.436931 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:39.446556 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:39.471259 2268811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1006 02:12:39.675074 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:39.703876 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:39.991971 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:40.101614 2268811 node_ready.go:58] node "addons-891734" has status "Ready":"False"
	I1006 02:12:40.188665 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:40.202679 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:40.470148 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:40.677841 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:40.702931 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:40.969508 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:41.174653 2268811 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.703307417s)
	I1006 02:12:41.177132 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:41.203585 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:41.451847 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:41.675296 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:41.702680 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:41.762400 2268811 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1006 02:12:41.762544 2268811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-891734
	I1006 02:12:41.788491 2268811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35264 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/addons-891734/id_rsa Username:docker}
	I1006 02:12:41.908249 2268811 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1006 02:12:41.928947 2268811 addons.go:231] Setting addon gcp-auth=true in "addons-891734"
	I1006 02:12:41.929038 2268811 host.go:66] Checking if "addons-891734" exists ...
	I1006 02:12:41.929512 2268811 cli_runner.go:164] Run: docker container inspect addons-891734 --format={{.State.Status}}
	I1006 02:12:41.949264 2268811 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1006 02:12:41.949317 2268811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-891734
	I1006 02:12:41.959717 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:41.974712 2268811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35264 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/addons-891734/id_rsa Username:docker}
	I1006 02:12:42.078904 2268811 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1006 02:12:42.080986 2268811 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1006 02:12:42.082976 2268811 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1006 02:12:42.083087 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1006 02:12:42.101766 2268811 node_ready.go:58] node "addons-891734" has status "Ready":"False"
	I1006 02:12:42.111725 2268811 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1006 02:12:42.111756 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1006 02:12:42.140366 2268811 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1006 02:12:42.140394 2268811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I1006 02:12:42.171224 2268811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1006 02:12:42.176449 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:42.203617 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:42.451247 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:42.675279 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:42.706794 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:42.835340 2268811 addons.go:467] Verifying addon gcp-auth=true in "addons-891734"
	I1006 02:12:42.838764 2268811 out.go:177] * Verifying gcp-auth addon...
	I1006 02:12:42.841932 2268811 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1006 02:12:42.863633 2268811 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1006 02:12:42.863699 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:12:42.872216 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:12:42.951753 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:43.175983 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:43.203136 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:43.376198 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:12:43.456019 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:43.675173 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:43.702525 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:43.877197 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:12:43.951654 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:44.101997 2268811 node_ready.go:58] node "addons-891734" has status "Ready":"False"
	I1006 02:12:44.175548 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:44.203503 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:44.376708 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:12:44.451971 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:44.675564 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:44.703185 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:44.877005 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:12:44.952230 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:45.176699 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:45.203674 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:45.376663 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:12:45.451428 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:45.675024 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:45.703835 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:45.876346 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:12:45.951858 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:46.175021 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:46.202657 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:46.376526 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:12:46.451433 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:46.600978 2268811 node_ready.go:58] node "addons-891734" has status "Ready":"False"
	I1006 02:12:46.674700 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:46.702207 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:46.876374 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:12:46.951400 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:47.176098 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:47.202426 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:47.376091 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:12:47.451750 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:47.674354 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:47.702654 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:47.875678 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:12:47.951535 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:48.175153 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:48.202077 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:48.376365 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:12:48.451203 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:48.674914 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:48.702612 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:48.875771 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:12:48.951890 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:49.101415 2268811 node_ready.go:58] node "addons-891734" has status "Ready":"False"
	I1006 02:12:49.174956 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:49.202662 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:49.375847 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:12:49.451464 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:49.675072 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:49.702880 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:49.876542 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:12:49.951661 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:50.174378 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:50.202665 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:50.376644 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:12:50.450949 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:50.674602 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:50.703210 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:50.875806 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:12:50.955778 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:51.101925 2268811 node_ready.go:58] node "addons-891734" has status "Ready":"False"
	I1006 02:12:51.175260 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:51.202649 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:51.375797 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:12:51.451627 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:51.676073 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:51.702484 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:51.876875 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:12:51.954469 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:52.175024 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:52.202116 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:52.376447 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:12:52.451438 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:52.674500 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:52.702597 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:52.875924 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:12:52.953322 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:53.174825 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:53.201884 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:53.376442 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:12:53.453408 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:53.601045 2268811 node_ready.go:58] node "addons-891734" has status "Ready":"False"
	I1006 02:12:53.674041 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:53.702114 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:53.875687 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:12:53.951582 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:54.174438 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:54.202596 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:54.375808 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:12:54.454641 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:54.674070 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:54.702148 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:54.875676 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:12:54.951089 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:55.174750 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:55.201848 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:55.377333 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:12:55.452128 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:55.604410 2268811 node_ready.go:58] node "addons-891734" has status "Ready":"False"
	I1006 02:12:55.675651 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:55.701503 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:55.875913 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:12:55.951001 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:56.174865 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:56.201758 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:56.376310 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:12:56.451578 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:56.674874 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:56.703732 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:56.875987 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:12:56.951371 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:57.174232 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:57.202828 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:57.375977 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:12:57.451384 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:57.674246 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:57.702511 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:57.876126 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:12:57.951325 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:58.101472 2268811 node_ready.go:58] node "addons-891734" has status "Ready":"False"
	I1006 02:12:58.175072 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:58.202390 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:58.377351 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:12:58.451272 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:58.674556 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:58.702582 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:58.875871 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:12:58.951402 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:59.174901 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:59.203537 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:59.375874 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:12:59.451143 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:12:59.675029 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:12:59.702190 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:12:59.875784 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:12:59.952188 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:00.103080 2268811 node_ready.go:58] node "addons-891734" has status "Ready":"False"
	I1006 02:13:00.175557 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:00.203330 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:00.376104 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:00.451884 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:00.674201 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:00.702411 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:00.876013 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:00.951470 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:01.175016 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:01.202327 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:01.375742 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:01.452187 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:01.674254 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:01.702350 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:01.876212 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:01.951410 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:02.174625 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:02.202460 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:02.375641 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:02.461619 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:02.601209 2268811 node_ready.go:58] node "addons-891734" has status "Ready":"False"
	I1006 02:13:02.674627 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:02.701688 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:02.880766 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:02.951274 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:03.174854 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:03.202018 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:03.376620 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:03.451599 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:03.674448 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:03.702676 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:03.876010 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:03.951500 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:04.174296 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:04.203082 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:04.376479 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:04.451906 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:04.674707 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:04.701943 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:04.876947 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:04.951337 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:05.101491 2268811 node_ready.go:58] node "addons-891734" has status "Ready":"False"
	I1006 02:13:05.174708 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:05.201666 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:05.375948 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:05.451326 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:05.674757 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:05.701967 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:05.877476 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:05.950880 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:06.175123 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:06.202768 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:06.376838 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:06.451306 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:06.674565 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:06.702579 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:06.875677 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:06.952044 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:07.105930 2268811 node_ready.go:58] node "addons-891734" has status "Ready":"False"
	I1006 02:13:07.175478 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:07.204247 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:07.406018 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:07.472628 2268811 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1006 02:13:07.472704 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
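(Editor's note: the kapi.go:86/kapi.go:96 pairs above show a label-selector wait loop: pods matching the selector were found, but the loop keeps reporting "Pending" until every pod runs. The following is a minimal sketch of that pattern, assuming a client-go kubernetes.Interface constructed elsewhere; the package and function names are hypothetical and this is not minikube's actual kapi.go implementation.)

package poll

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForPodsBySelector polls pods matching a label selector until all of them
// reach the Running phase, logging the still-pending state on each pass, in the
// spirit of the "waiting for pod <selector>, current state: Pending" lines above.
func waitForPodsBySelector(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		// No pods yet also counts as "not ready"; the log above shows the loop
		// running before any pod for the selector exists.
		allRunning := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				allRunning = false
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
			}
		}
		if allRunning {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for pods %q in namespace %q", selector, ns)
		}
		time.Sleep(500 * time.Millisecond)
	}
}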
	I1006 02:13:07.601496 2268811 node_ready.go:49] node "addons-891734" has status "Ready":"True"
	I1006 02:13:07.601577 2268811 node_ready.go:38] duration metric: took 34.636647155s waiting for node "addons-891734" to be "Ready" ...
	I1006 02:13:07.601603 2268811 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
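(Editor's note: the node_ready.go lines above flip from "Ready":"False" to "Ready":"True" and record the wait duration. A readiness probe like that reduces to inspecting the node's NodeReady condition; a minimal sketch follows, again assuming client-go, with a hypothetical helper name rather than minikube's node_ready.go code.)

package poll

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeIsReady reports whether the named node's Ready condition is True,
// mirroring the `node "addons-891734" has status "Ready":"False"` lines above.
func nodeIsReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
	node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	// A node with no Ready condition reported yet is treated as not ready.
	return false, nil
}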
	I1006 02:13:07.613242 2268811 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mls87" in "kube-system" namespace to be "Ready" ...
	I1006 02:13:07.682177 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:07.778767 2268811 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1006 02:13:07.778841 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:07.880050 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:07.953163 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:08.249909 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:08.251910 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:08.376655 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:08.452819 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:08.676627 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:08.707994 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:08.877723 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:08.952328 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:09.175702 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:09.203501 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:09.376494 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:09.452682 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:09.654589 2268811 pod_ready.go:102] pod "coredns-5dd5756b68-mls87" in "kube-system" namespace has status "Ready":"False"
	I1006 02:13:09.675180 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:09.702826 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:09.876566 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:09.952147 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:10.156952 2268811 pod_ready.go:92] pod "coredns-5dd5756b68-mls87" in "kube-system" namespace has status "Ready":"True"
	I1006 02:13:10.156985 2268811 pod_ready.go:81] duration metric: took 2.543671673s waiting for pod "coredns-5dd5756b68-mls87" in "kube-system" namespace to be "Ready" ...
	I1006 02:13:10.157009 2268811 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-891734" in "kube-system" namespace to be "Ready" ...
	I1006 02:13:10.164658 2268811 pod_ready.go:92] pod "etcd-addons-891734" in "kube-system" namespace has status "Ready":"True"
	I1006 02:13:10.164684 2268811 pod_ready.go:81] duration metric: took 7.668887ms waiting for pod "etcd-addons-891734" in "kube-system" namespace to be "Ready" ...
	I1006 02:13:10.164701 2268811 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-891734" in "kube-system" namespace to be "Ready" ...
	I1006 02:13:10.170827 2268811 pod_ready.go:92] pod "kube-apiserver-addons-891734" in "kube-system" namespace has status "Ready":"True"
	I1006 02:13:10.170855 2268811 pod_ready.go:81] duration metric: took 6.116589ms waiting for pod "kube-apiserver-addons-891734" in "kube-system" namespace to be "Ready" ...
	I1006 02:13:10.170867 2268811 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-891734" in "kube-system" namespace to be "Ready" ...
	I1006 02:13:10.174667 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:10.182224 2268811 pod_ready.go:92] pod "kube-controller-manager-addons-891734" in "kube-system" namespace has status "Ready":"True"
	I1006 02:13:10.182249 2268811 pod_ready.go:81] duration metric: took 11.37484ms waiting for pod "kube-controller-manager-addons-891734" in "kube-system" namespace to be "Ready" ...
	I1006 02:13:10.182263 2268811 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c67j7" in "kube-system" namespace to be "Ready" ...
	I1006 02:13:10.204678 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:10.375917 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:10.401614 2268811 pod_ready.go:92] pod "kube-proxy-c67j7" in "kube-system" namespace has status "Ready":"True"
	I1006 02:13:10.401642 2268811 pod_ready.go:81] duration metric: took 219.371949ms waiting for pod "kube-proxy-c67j7" in "kube-system" namespace to be "Ready" ...
	I1006 02:13:10.401654 2268811 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-891734" in "kube-system" namespace to be "Ready" ...
	I1006 02:13:10.452945 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:10.675498 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:10.703072 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:10.802021 2268811 pod_ready.go:92] pod "kube-scheduler-addons-891734" in "kube-system" namespace has status "Ready":"True"
	I1006 02:13:10.802048 2268811 pod_ready.go:81] duration metric: took 400.387107ms waiting for pod "kube-scheduler-addons-891734" in "kube-system" namespace to be "Ready" ...
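(Editor's note: the pod_ready.go lines above wait on each system-critical pod by name, print "Ready":"False" while waiting, and emit a duration metric on success. A sketch of that shape is below, assuming client-go and its apimachinery wait helpers; waitForPodReady is a hypothetical name and this is not minikube's pod_ready.go implementation.)

package poll

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodReady polls a named pod until its PodReady condition is True,
// logging while it is not and reporting the elapsed time on success, like the
// pod_ready.go:102 / pod_ready.go:92 / pod_ready.go:81 lines above.
func waitForPodReady(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	start := time.Now()
	err := wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				if cond.Status == corev1.ConditionTrue {
					return true, nil
				}
				fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", name, ns)
			}
		}
		return false, nil
	})
	if err == nil {
		fmt.Printf("duration metric: took %s waiting for pod %q to be \"Ready\"\n", time.Since(start), name)
	}
	return err
}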
	I1006 02:13:10.802064 2268811 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-v9pjf" in "kube-system" namespace to be "Ready" ...
	I1006 02:13:10.876235 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:10.952238 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:11.174736 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:11.203111 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:11.376006 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:11.452794 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:11.674436 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:11.703102 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:11.876631 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:11.951996 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:12.176125 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:12.202661 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:12.375795 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:12.453438 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:12.674754 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:12.702582 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:12.876379 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:12.952393 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:13.108142 2268811 pod_ready.go:102] pod "metrics-server-7c66d45ddc-v9pjf" in "kube-system" namespace has status "Ready":"False"
	I1006 02:13:13.174508 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:13.203041 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:13.375607 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:13.452239 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:13.674393 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:13.702893 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:13.876464 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:13.952970 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:14.174652 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:14.203146 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:14.376148 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:14.452860 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:14.674654 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:14.702211 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:14.876052 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:14.952104 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:15.175318 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:15.203282 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:15.376001 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:15.452598 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:15.608622 2268811 pod_ready.go:102] pod "metrics-server-7c66d45ddc-v9pjf" in "kube-system" namespace has status "Ready":"False"
	I1006 02:13:15.675127 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:15.702936 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:15.876103 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:15.952623 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:16.175142 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:16.203085 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:16.376087 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:16.452483 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:16.675365 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:16.703678 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:16.875834 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:16.952142 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:17.175560 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:17.202917 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:17.376505 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:17.452421 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:17.609072 2268811 pod_ready.go:102] pod "metrics-server-7c66d45ddc-v9pjf" in "kube-system" namespace has status "Ready":"False"
	I1006 02:13:17.675115 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:17.703408 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:17.876051 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:17.951825 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:18.174201 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:18.203262 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:18.375853 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:18.452750 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:18.674766 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:18.702338 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:18.876704 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:18.953256 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:19.175353 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:19.204184 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:19.376474 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:19.453381 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:19.675256 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:19.702629 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:19.875861 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:19.952446 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:20.108513 2268811 pod_ready.go:102] pod "metrics-server-7c66d45ddc-v9pjf" in "kube-system" namespace has status "Ready":"False"
	I1006 02:13:20.174963 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:20.202605 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:20.376318 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:20.452769 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:20.676620 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:20.703331 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:20.875816 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:20.951816 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:21.175228 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:21.202499 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:21.376254 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:21.452038 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:21.674784 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:21.702288 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:21.875813 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:21.952369 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:22.174581 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:22.203962 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:22.376617 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:22.452164 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:22.607473 2268811 pod_ready.go:102] pod "metrics-server-7c66d45ddc-v9pjf" in "kube-system" namespace has status "Ready":"False"
	I1006 02:13:22.674758 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:22.703107 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:22.876141 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:22.953002 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:23.174900 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:23.205996 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:23.376406 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:23.452672 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:23.675279 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:23.706351 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:23.875930 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:23.954496 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:24.175190 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:24.203415 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:24.376209 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:24.463283 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:24.607916 2268811 pod_ready.go:102] pod "metrics-server-7c66d45ddc-v9pjf" in "kube-system" namespace has status "Ready":"False"
	I1006 02:13:24.675026 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:24.703482 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:24.876731 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:24.960582 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:25.175984 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:25.202739 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:25.379941 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:25.460130 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:25.675009 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:25.702766 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:25.876371 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:25.953359 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:26.175275 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:26.204029 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:26.376080 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:26.456359 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:26.609965 2268811 pod_ready.go:102] pod "metrics-server-7c66d45ddc-v9pjf" in "kube-system" namespace has status "Ready":"False"
	I1006 02:13:26.675265 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:26.702960 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:26.876213 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:26.952914 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:27.175719 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:27.202450 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:27.376954 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:27.454011 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:27.674796 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:27.702592 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:27.876415 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:27.953681 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:28.175337 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:28.203315 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:28.379571 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:28.458117 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:28.675474 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:28.703518 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:28.876189 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:28.953977 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:29.115345 2268811 pod_ready.go:102] pod "metrics-server-7c66d45ddc-v9pjf" in "kube-system" namespace has status "Ready":"False"
	I1006 02:13:29.181835 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:29.205517 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:29.376695 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:29.463909 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:29.682824 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:29.703195 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:29.876018 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:29.953457 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:30.174878 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:30.203119 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:30.375917 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:30.458105 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:30.675010 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:30.702781 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:30.881072 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:30.956242 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:31.175223 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:31.207417 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:31.378257 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:31.456611 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:31.608154 2268811 pod_ready.go:102] pod "metrics-server-7c66d45ddc-v9pjf" in "kube-system" namespace has status "Ready":"False"
	I1006 02:13:31.675580 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:31.703687 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:31.878457 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:31.955360 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:32.178011 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:32.202626 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:32.377338 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:32.468747 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:32.676055 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:32.703704 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:32.878302 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:32.959972 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:33.175813 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:33.203200 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:33.376844 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:33.455617 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:33.609075 2268811 pod_ready.go:102] pod "metrics-server-7c66d45ddc-v9pjf" in "kube-system" namespace has status "Ready":"False"
	I1006 02:13:33.676361 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:33.703363 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:33.876444 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:33.953621 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:34.176810 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:34.203093 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:34.376539 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:34.459941 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:34.675309 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:34.703524 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:34.877482 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:34.953695 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:35.177851 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:35.208321 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:35.377511 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:35.461301 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:35.610458 2268811 pod_ready.go:102] pod "metrics-server-7c66d45ddc-v9pjf" in "kube-system" namespace has status "Ready":"False"
	I1006 02:13:35.675520 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:35.702972 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:35.876036 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:35.952531 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:36.182937 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:36.205054 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:36.376419 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:36.452273 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:36.675306 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:36.703295 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:36.876589 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:36.955308 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:37.186275 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:37.203803 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:37.377275 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:37.455486 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:37.678162 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:37.704615 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:37.881353 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:37.952479 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:38.110751 2268811 pod_ready.go:102] pod "metrics-server-7c66d45ddc-v9pjf" in "kube-system" namespace has status "Ready":"False"
	I1006 02:13:38.191575 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:38.207362 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:38.376602 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:38.451958 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:38.675459 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:38.703191 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:38.876304 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:38.953869 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:39.175816 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:39.203921 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:39.376101 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:39.457607 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:39.678031 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:39.704343 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:39.876285 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:39.954129 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:40.175672 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:40.203701 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:40.376046 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:40.452044 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:40.608308 2268811 pod_ready.go:102] pod "metrics-server-7c66d45ddc-v9pjf" in "kube-system" namespace has status "Ready":"False"
	I1006 02:13:40.675274 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:40.703691 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:40.876620 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:40.952760 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:41.175590 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:41.206834 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:41.376921 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:41.453709 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:41.674663 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:41.703652 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:41.881812 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:41.952627 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:42.176983 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:42.219953 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:42.377135 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:42.464018 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:42.609267 2268811 pod_ready.go:102] pod "metrics-server-7c66d45ddc-v9pjf" in "kube-system" namespace has status "Ready":"False"
	I1006 02:13:42.674830 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:42.702690 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:42.880471 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:42.954055 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:43.174944 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:43.202896 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:43.381701 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:43.451942 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:43.675024 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:43.703103 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:43.875783 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:43.953770 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:44.174939 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:44.204465 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:44.376560 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:44.452156 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:44.674406 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:44.706875 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:44.876286 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:44.952738 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:45.109605 2268811 pod_ready.go:102] pod "metrics-server-7c66d45ddc-v9pjf" in "kube-system" namespace has status "Ready":"False"
	I1006 02:13:45.176861 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:45.205041 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:45.376402 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:45.453976 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:45.674543 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:45.703161 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:45.875986 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:45.952433 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:46.174906 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:46.204969 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:46.376574 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:46.456949 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:46.676593 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:46.706656 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:46.877188 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:46.959980 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:47.174517 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:47.205675 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:47.376557 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:47.461445 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:47.607980 2268811 pod_ready.go:102] pod "metrics-server-7c66d45ddc-v9pjf" in "kube-system" namespace has status "Ready":"False"
	I1006 02:13:47.675863 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:47.703098 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:47.877021 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:47.955757 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:48.175536 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:48.210856 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:48.376970 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:48.461402 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:48.675663 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:48.709434 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:48.878462 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:48.954373 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:49.176489 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:49.204280 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:49.377130 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:49.465643 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:49.612014 2268811 pod_ready.go:102] pod "metrics-server-7c66d45ddc-v9pjf" in "kube-system" namespace has status "Ready":"False"
	I1006 02:13:49.676346 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:49.706457 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:49.876879 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:49.953434 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:50.176478 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:50.240595 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:50.376683 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:50.468318 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:50.682523 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:50.703229 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:50.876493 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:50.956628 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:51.189786 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:51.203607 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:51.377020 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:51.452082 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:51.675228 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:51.702799 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:51.877514 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:51.953059 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:52.107312 2268811 pod_ready.go:102] pod "metrics-server-7c66d45ddc-v9pjf" in "kube-system" namespace has status "Ready":"False"
	I1006 02:13:52.174201 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:52.202753 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:52.376572 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:52.452623 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:52.674374 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:52.702998 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:52.876059 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:52.951855 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:53.176088 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:53.204018 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:53.376182 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:53.457615 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:53.681879 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:53.705795 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:53.880098 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:53.954622 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:54.110195 2268811 pod_ready.go:102] pod "metrics-server-7c66d45ddc-v9pjf" in "kube-system" namespace has status "Ready":"False"
	I1006 02:13:54.175625 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:54.203198 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:54.376122 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:54.456968 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:54.677581 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:54.704799 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:54.879145 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:54.955764 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:55.176228 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:55.205231 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:55.376431 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:55.456880 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:55.674713 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:55.703425 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:55.878349 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:55.957939 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:56.174583 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:56.203694 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:56.377617 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:56.458042 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:56.607647 2268811 pod_ready.go:102] pod "metrics-server-7c66d45ddc-v9pjf" in "kube-system" namespace has status "Ready":"False"
	I1006 02:13:56.675662 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:56.703152 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:56.876295 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:56.952252 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:57.175276 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:57.202872 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:57.376963 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:57.452527 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:57.675205 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:57.704431 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:57.876483 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:57.953010 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:58.112398 2268811 pod_ready.go:92] pod "metrics-server-7c66d45ddc-v9pjf" in "kube-system" namespace has status "Ready":"True"
	I1006 02:13:58.112473 2268811 pod_ready.go:81] duration metric: took 47.310400337s waiting for pod "metrics-server-7c66d45ddc-v9pjf" in "kube-system" namespace to be "Ready" ...
	I1006 02:13:58.112502 2268811 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-2pwfm" in "kube-system" namespace to be "Ready" ...
	I1006 02:13:58.119851 2268811 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-2pwfm" in "kube-system" namespace has status "Ready":"True"
	I1006 02:13:58.119923 2268811 pod_ready.go:81] duration metric: took 7.39878ms waiting for pod "nvidia-device-plugin-daemonset-2pwfm" in "kube-system" namespace to be "Ready" ...
	I1006 02:13:58.119977 2268811 pod_ready.go:38] duration metric: took 50.518333978s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
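The long run of "waiting for pod … Pending" lines above is kapi.go's readiness poll: list pods by label selector roughly twice per second until every match reports the Ready condition, then emit the duration metric. Below is a minimal client-go sketch of that kind of loop, not minikube's actual kapi.go code; the namespace, selector, kubeconfig path, timeout, and cadence are illustrative assumptions.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podsReady reports whether every pod matching selector in ns has the
// Ready condition set to True. No matching pods counts as "still Pending".
func podsReady(ctx context.Context, cs kubernetes.Interface, ns, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil || len(pods.Items) == 0 {
		return false, err
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
				break
			}
		}
		if !ready {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		if ok, err := podsReady(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry"); err == nil && ok {
			fmt.Println("all matching pods Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pods")
			return
		case <-time.After(500 * time.Millisecond): // roughly the cadence visible in the log
		}
	}
}
```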
	I1006 02:13:58.120011 2268811 api_server.go:52] waiting for apiserver process to appear ...
	I1006 02:13:58.120065 2268811 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 02:13:58.120160 2268811 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 02:13:58.174628 2268811 cri.go:89] found id: "27c082cd8ae5999ad0f0ca05adbe90847b817c9351c089bc379a187e9d384369"
	I1006 02:13:58.174697 2268811 cri.go:89] found id: ""
	I1006 02:13:58.174720 2268811 logs.go:284] 1 containers: [27c082cd8ae5999ad0f0ca05adbe90847b817c9351c089bc379a187e9d384369]
	I1006 02:13:58.174811 2268811 ssh_runner.go:195] Run: which crictl
	I1006 02:13:58.178776 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:58.180802 2268811 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 02:13:58.180924 2268811 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 02:13:58.203895 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:58.252547 2268811 cri.go:89] found id: "3e58465ec0dad3716a0f6fdcbfeb0e6dd40b06e22263c02ffd01c94a8462da7f"
	I1006 02:13:58.252617 2268811 cri.go:89] found id: ""
	I1006 02:13:58.252639 2268811 logs.go:284] 1 containers: [3e58465ec0dad3716a0f6fdcbfeb0e6dd40b06e22263c02ffd01c94a8462da7f]
	I1006 02:13:58.252735 2268811 ssh_runner.go:195] Run: which crictl
	I1006 02:13:58.270269 2268811 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 02:13:58.270433 2268811 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 02:13:58.376913 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:58.453969 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:58.542070 2268811 cri.go:89] found id: "baadd818084efc2a4aabb4ae2b956bd8d563513bd47c92b42ba75462794db637"
	I1006 02:13:58.542140 2268811 cri.go:89] found id: ""
	I1006 02:13:58.542162 2268811 logs.go:284] 1 containers: [baadd818084efc2a4aabb4ae2b956bd8d563513bd47c92b42ba75462794db637]
	I1006 02:13:58.542256 2268811 ssh_runner.go:195] Run: which crictl
	I1006 02:13:58.557545 2268811 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 02:13:58.557691 2268811 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 02:13:58.677737 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:58.702971 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:58.739800 2268811 cri.go:89] found id: "d8f5c121581ac399db699f771268c9fd21bab013daa95c65dc53b921e150b077"
	I1006 02:13:58.739881 2268811 cri.go:89] found id: ""
	I1006 02:13:58.739905 2268811 logs.go:284] 1 containers: [d8f5c121581ac399db699f771268c9fd21bab013daa95c65dc53b921e150b077]
	I1006 02:13:58.740008 2268811 ssh_runner.go:195] Run: which crictl
	I1006 02:13:58.754680 2268811 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 02:13:58.754828 2268811 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 02:13:58.876987 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:58.931334 2268811 cri.go:89] found id: "bf975e8417fd962a40c466d2913d0fdd34264e9aead6c69acba465f0c00401f5"
	I1006 02:13:58.931409 2268811 cri.go:89] found id: ""
	I1006 02:13:58.931443 2268811 logs.go:284] 1 containers: [bf975e8417fd962a40c466d2913d0fdd34264e9aead6c69acba465f0c00401f5]
	I1006 02:13:58.931531 2268811 ssh_runner.go:195] Run: which crictl
	I1006 02:13:58.954781 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:58.956263 2268811 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 02:13:58.956384 2268811 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 02:13:59.192646 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:59.211080 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:59.234676 2268811 cri.go:89] found id: "91a705de1743c9ffe70d4720f9dac4523e884bbe289aea05416b550ab6249c67"
	I1006 02:13:59.234700 2268811 cri.go:89] found id: ""
	I1006 02:13:59.234709 2268811 logs.go:284] 1 containers: [91a705de1743c9ffe70d4720f9dac4523e884bbe289aea05416b550ab6249c67]
	I1006 02:13:59.234764 2268811 ssh_runner.go:195] Run: which crictl
	I1006 02:13:59.248165 2268811 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 02:13:59.248242 2268811 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 02:13:59.351500 2268811 cri.go:89] found id: "5a84cf3db6ffb45808d711e27820b13396b233b3c67794d1aaab46bfe1614cd3"
	I1006 02:13:59.351524 2268811 cri.go:89] found id: ""
	I1006 02:13:59.351532 2268811 logs.go:284] 1 containers: [5a84cf3db6ffb45808d711e27820b13396b233b3c67794d1aaab46bfe1614cd3]
	I1006 02:13:59.351587 2268811 ssh_runner.go:195] Run: which crictl
	I1006 02:13:59.363621 2268811 logs.go:123] Gathering logs for kubelet ...
	I1006 02:13:59.363696 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1006 02:13:59.376742 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1006 02:13:59.443153 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: W1006 02:13:07.365373    1349 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-891734" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-891734' and this object
	W1006 02:13:59.443482 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: E1006 02:13:07.365495    1349 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-891734" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-891734' and this object
	W1006 02:13:59.444933 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: W1006 02:13:07.377500    1349 reflector.go:535] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-891734" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-891734' and this object
	W1006 02:13:59.445176 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: E1006 02:13:07.377613    1349 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-891734" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-891734' and this object
	W1006 02:13:59.446202 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: W1006 02:13:07.390685    1349 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-891734" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-891734' and this object
	W1006 02:13:59.446433 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: E1006 02:13:07.390724    1349 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-891734" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-891734' and this object
	W1006 02:13:59.446631 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: W1006 02:13:07.401483    1349 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-891734" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-891734' and this object
	W1006 02:13:59.446846 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: E1006 02:13:07.401523    1349 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-891734" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-891734' and this object
	W1006 02:13:59.447095 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: W1006 02:13:07.401566    1349 reflector.go:535] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-891734" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-891734' and this object
	W1006 02:13:59.447553 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: E1006 02:13:07.401576    1349 reflector.go:147] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-891734" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-891734' and this object
	W1006 02:13:59.451209 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: W1006 02:13:07.404375    1349 reflector.go:535] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-891734" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-891734' and this object
	W1006 02:13:59.451480 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: E1006 02:13:07.404418    1349 reflector.go:147] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-891734" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-891734' and this object
	W1006 02:13:59.451696 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: W1006 02:13:07.404471    1349 reflector.go:535] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-891734" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-891734' and this object
	W1006 02:13:59.451936 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: E1006 02:13:07.404482    1349 reflector.go:147] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-891734" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-891734' and this object
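The "Found kubelet problem" entries above are reflector list/watch failures ("no relationship found between node … and this object"): the familiar startup race in which the node authorizer has not yet linked the node to the secrets and configmaps its pods reference, which typically clears once the bindings propagate. A rough sketch of the kind of journal scan logs.go:138 appears to perform; the journalctl invocation and the pattern are assumptions for illustration, not minikube's exact code, and reading the kubelet journal generally requires root.

```go
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"regexp"
)

func main() {
	// Same journal slice the log shows minikube reading.
	out, err := exec.Command("journalctl", "-u", "kubelet", "-n", "400").Output()
	if err != nil {
		panic(err)
	}
	// Flag reflector list/watch failures like the "is forbidden" lines above.
	problem := regexp.MustCompile(`reflector\.go:\d+\].*(failed to list|Failed to watch)`)
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		if line := sc.Text(); problem.MatchString(line) {
			fmt.Println("Found kubelet problem:", line)
		}
	}
}
```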
	I1006 02:13:59.457001 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:13:59.491698 2268811 logs.go:123] Gathering logs for kube-apiserver [27c082cd8ae5999ad0f0ca05adbe90847b817c9351c089bc379a187e9d384369] ...
	I1006 02:13:59.491766 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27c082cd8ae5999ad0f0ca05adbe90847b817c9351c089bc379a187e9d384369"
	I1006 02:13:59.573046 2268811 logs.go:123] Gathering logs for kube-proxy [bf975e8417fd962a40c466d2913d0fdd34264e9aead6c69acba465f0c00401f5] ...
	I1006 02:13:59.573119 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf975e8417fd962a40c466d2913d0fdd34264e9aead6c69acba465f0c00401f5"
	I1006 02:13:59.662671 2268811 logs.go:123] Gathering logs for kube-controller-manager [91a705de1743c9ffe70d4720f9dac4523e884bbe289aea05416b550ab6249c67] ...
	I1006 02:13:59.662701 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91a705de1743c9ffe70d4720f9dac4523e884bbe289aea05416b550ab6249c67"
	I1006 02:13:59.677247 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:13:59.707582 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:13:59.769700 2268811 logs.go:123] Gathering logs for CRI-O ...
	I1006 02:13:59.769739 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 02:13:59.865501 2268811 logs.go:123] Gathering logs for dmesg ...
	I1006 02:13:59.865539 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 02:13:59.876679 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:13:59.890583 2268811 logs.go:123] Gathering logs for describe nodes ...
	I1006 02:13:59.890614 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1006 02:13:59.952530 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:00.172716 2268811 logs.go:123] Gathering logs for etcd [3e58465ec0dad3716a0f6fdcbfeb0e6dd40b06e22263c02ffd01c94a8462da7f] ...
	I1006 02:14:00.172754 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e58465ec0dad3716a0f6fdcbfeb0e6dd40b06e22263c02ffd01c94a8462da7f"
	I1006 02:14:00.182254 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:00.210583 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:14:00.270841 2268811 logs.go:123] Gathering logs for coredns [baadd818084efc2a4aabb4ae2b956bd8d563513bd47c92b42ba75462794db637] ...
	I1006 02:14:00.270879 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 baadd818084efc2a4aabb4ae2b956bd8d563513bd47c92b42ba75462794db637"
	I1006 02:14:00.334540 2268811 logs.go:123] Gathering logs for kube-scheduler [d8f5c121581ac399db699f771268c9fd21bab013daa95c65dc53b921e150b077] ...
	I1006 02:14:00.334569 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8f5c121581ac399db699f771268c9fd21bab013daa95c65dc53b921e150b077"
	I1006 02:14:00.377396 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:14:00.418585 2268811 logs.go:123] Gathering logs for kindnet [5a84cf3db6ffb45808d711e27820b13396b233b3c67794d1aaab46bfe1614cd3] ...
	I1006 02:14:00.418675 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a84cf3db6ffb45808d711e27820b13396b233b3c67794d1aaab46bfe1614cd3"
	I1006 02:14:00.480810 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:00.495629 2268811 logs.go:123] Gathering logs for container status ...
	I1006 02:14:00.495657 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 02:14:00.603386 2268811 out.go:309] Setting ErrFile to fd 2...
	I1006 02:14:00.603455 2268811 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1006 02:14:00.603534 2268811 out.go:239] X Problems detected in kubelet:
	W1006 02:14:00.603574 2268811 out.go:239]   Oct 06 02:13:07 addons-891734 kubelet[1349]: E1006 02:13:07.401576    1349 reflector.go:147] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-891734" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-891734' and this object
	W1006 02:14:00.603610 2268811 out.go:239]   Oct 06 02:13:07 addons-891734 kubelet[1349]: W1006 02:13:07.404375    1349 reflector.go:535] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-891734" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-891734' and this object
	W1006 02:14:00.603657 2268811 out.go:239]   Oct 06 02:13:07 addons-891734 kubelet[1349]: E1006 02:13:07.404418    1349 reflector.go:147] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-891734" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-891734' and this object
	W1006 02:14:00.603695 2268811 out.go:239]   Oct 06 02:13:07 addons-891734 kubelet[1349]: W1006 02:13:07.404471    1349 reflector.go:535] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-891734" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-891734' and this object
	W1006 02:14:00.603738 2268811 out.go:239]   Oct 06 02:13:07 addons-891734 kubelet[1349]: E1006 02:13:07.404482    1349 reflector.go:147] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-891734" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-891734' and this object
	I1006 02:14:00.603780 2268811 out.go:309] Setting ErrFile to fd 2...
	I1006 02:14:00.603801 2268811 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 02:14:00.675107 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:00.704270 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:14:00.876985 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:14:00.954235 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:01.187772 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:01.204824 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:14:01.377456 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:14:01.473930 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:01.676567 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:01.704224 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:14:01.876397 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:14:01.955378 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:02.174960 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:02.203581 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 02:14:02.376587 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:14:02.453214 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:02.675122 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:02.702719 2268811 kapi.go:107] duration metric: took 1m24.593042895s to wait for kubernetes.io/minikube-addons=registry ...
	I1006 02:14:02.876665 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:14:02.952204 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:03.175517 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:03.376291 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:14:03.452381 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:03.676056 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:03.876105 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:14:03.954185 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:04.178113 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:04.375913 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:14:04.454842 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:04.675457 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:04.882029 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:14:04.955311 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:05.176070 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:05.376078 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:14:05.456793 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:05.675475 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:05.877007 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:14:05.953716 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:06.175294 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:06.381708 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:14:06.476923 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:06.674894 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:06.876843 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:14:06.953346 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:07.185402 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:07.376095 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:14:07.470696 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:07.675296 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:07.876796 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:14:07.952650 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:08.179962 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:08.376933 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:14:08.469449 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:08.675071 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:08.876673 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:14:08.956980 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:09.175459 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:09.376060 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:14:09.461898 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:09.675293 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:09.877403 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:14:09.953095 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:10.176527 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:10.377127 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:14:10.464038 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:10.604992 2268811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 02:14:10.637948 2268811 api_server.go:72] duration metric: took 1m37.737755822s to wait for apiserver process to appear ...
	I1006 02:14:10.637973 2268811 api_server.go:88] waiting for apiserver healthz status ...
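Having seen the kube-apiserver process via pgrep, api_server.go:88 next waits for the apiserver's health endpoint. A minimal sketch of such a wait: poll GET /healthz until it answers 200 "ok". The address, TLS handling, timeout, and interval below are illustrative assumptions, not minikube's actual implementation.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver serves TLS with a cluster-internal CA; a real
		// client would trust that CA rather than skip verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.49.2:8443/healthz") // address assumed
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body))
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for healthz")
}
```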
	I1006 02:14:10.638001 2268811 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 02:14:10.638068 2268811 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 02:14:10.677469 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:10.745623 2268811 cri.go:89] found id: "27c082cd8ae5999ad0f0ca05adbe90847b817c9351c089bc379a187e9d384369"
	I1006 02:14:10.745647 2268811 cri.go:89] found id: ""
	I1006 02:14:10.745656 2268811 logs.go:284] 1 containers: [27c082cd8ae5999ad0f0ca05adbe90847b817c9351c089bc379a187e9d384369]
	I1006 02:14:10.745710 2268811 ssh_runner.go:195] Run: which crictl
	I1006 02:14:10.759053 2268811 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 02:14:10.759126 2268811 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 02:14:10.830430 2268811 cri.go:89] found id: "3e58465ec0dad3716a0f6fdcbfeb0e6dd40b06e22263c02ffd01c94a8462da7f"
	I1006 02:14:10.830449 2268811 cri.go:89] found id: ""
	I1006 02:14:10.830458 2268811 logs.go:284] 1 containers: [3e58465ec0dad3716a0f6fdcbfeb0e6dd40b06e22263c02ffd01c94a8462da7f]
	I1006 02:14:10.830511 2268811 ssh_runner.go:195] Run: which crictl
	I1006 02:14:10.837176 2268811 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 02:14:10.844760 2268811 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 02:14:10.876533 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:14:10.897157 2268811 cri.go:89] found id: "baadd818084efc2a4aabb4ae2b956bd8d563513bd47c92b42ba75462794db637"
	I1006 02:14:10.897178 2268811 cri.go:89] found id: ""
	I1006 02:14:10.897186 2268811 logs.go:284] 1 containers: [baadd818084efc2a4aabb4ae2b956bd8d563513bd47c92b42ba75462794db637]
	I1006 02:14:10.897255 2268811 ssh_runner.go:195] Run: which crictl
	I1006 02:14:10.902946 2268811 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 02:14:10.903028 2268811 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 02:14:10.955176 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:10.959319 2268811 cri.go:89] found id: "d8f5c121581ac399db699f771268c9fd21bab013daa95c65dc53b921e150b077"
	I1006 02:14:10.959344 2268811 cri.go:89] found id: ""
	I1006 02:14:10.959359 2268811 logs.go:284] 1 containers: [d8f5c121581ac399db699f771268c9fd21bab013daa95c65dc53b921e150b077]
	I1006 02:14:10.959452 2268811 ssh_runner.go:195] Run: which crictl
	I1006 02:14:10.964748 2268811 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 02:14:10.964828 2268811 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 02:14:11.021014 2268811 cri.go:89] found id: "bf975e8417fd962a40c466d2913d0fdd34264e9aead6c69acba465f0c00401f5"
	I1006 02:14:11.021037 2268811 cri.go:89] found id: ""
	I1006 02:14:11.021045 2268811 logs.go:284] 1 containers: [bf975e8417fd962a40c466d2913d0fdd34264e9aead6c69acba465f0c00401f5]
	I1006 02:14:11.021099 2268811 ssh_runner.go:195] Run: which crictl
	I1006 02:14:11.026423 2268811 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 02:14:11.026510 2268811 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 02:14:11.085335 2268811 cri.go:89] found id: "91a705de1743c9ffe70d4720f9dac4523e884bbe289aea05416b550ab6249c67"
	I1006 02:14:11.085359 2268811 cri.go:89] found id: ""
	I1006 02:14:11.085368 2268811 logs.go:284] 1 containers: [91a705de1743c9ffe70d4720f9dac4523e884bbe289aea05416b550ab6249c67]
	I1006 02:14:11.085424 2268811 ssh_runner.go:195] Run: which crictl
	I1006 02:14:11.093298 2268811 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 02:14:11.093368 2268811 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 02:14:11.169683 2268811 cri.go:89] found id: "5a84cf3db6ffb45808d711e27820b13396b233b3c67794d1aaab46bfe1614cd3"
	I1006 02:14:11.169704 2268811 cri.go:89] found id: ""
	I1006 02:14:11.169712 2268811 logs.go:284] 1 containers: [5a84cf3db6ffb45808d711e27820b13396b233b3c67794d1aaab46bfe1614cd3]
	I1006 02:14:11.169767 2268811 ssh_runner.go:195] Run: which crictl
	I1006 02:14:11.176495 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:11.179408 2268811 logs.go:123] Gathering logs for describe nodes ...
	I1006 02:14:11.179482 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1006 02:14:11.380272 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 02:14:11.385274 2268811 logs.go:123] Gathering logs for kube-apiserver [27c082cd8ae5999ad0f0ca05adbe90847b817c9351c089bc379a187e9d384369] ...
	I1006 02:14:11.385308 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27c082cd8ae5999ad0f0ca05adbe90847b817c9351c089bc379a187e9d384369"
	I1006 02:14:11.450627 2268811 logs.go:123] Gathering logs for etcd [3e58465ec0dad3716a0f6fdcbfeb0e6dd40b06e22263c02ffd01c94a8462da7f] ...
	I1006 02:14:11.455148 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e58465ec0dad3716a0f6fdcbfeb0e6dd40b06e22263c02ffd01c94a8462da7f"
	I1006 02:14:11.461601 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:11.536804 2268811 logs.go:123] Gathering logs for kube-scheduler [d8f5c121581ac399db699f771268c9fd21bab013daa95c65dc53b921e150b077] ...
	I1006 02:14:11.536837 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8f5c121581ac399db699f771268c9fd21bab013daa95c65dc53b921e150b077"
	I1006 02:14:11.593075 2268811 logs.go:123] Gathering logs for container status ...
	I1006 02:14:11.593104 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 02:14:11.676093 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:11.684253 2268811 logs.go:123] Gathering logs for CRI-O ...
	I1006 02:14:11.684286 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 02:14:11.777463 2268811 logs.go:123] Gathering logs for kubelet ...
	I1006 02:14:11.777499 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1006 02:14:11.844075 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: W1006 02:13:07.365373    1349 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-891734" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-891734' and this object
	W1006 02:14:11.844306 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: E1006 02:13:07.365495    1349 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-891734" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-891734' and this object
	W1006 02:14:11.845669 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: W1006 02:13:07.377500    1349 reflector.go:535] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-891734" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-891734' and this object
	W1006 02:14:11.845851 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: E1006 02:13:07.377613    1349 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-891734" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-891734' and this object
	W1006 02:14:11.846746 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: W1006 02:13:07.390685    1349 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-891734" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-891734' and this object
	W1006 02:14:11.846938 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: E1006 02:13:07.390724    1349 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-891734" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-891734' and this object
	W1006 02:14:11.847110 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: W1006 02:13:07.401483    1349 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-891734" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-891734' and this object
	W1006 02:14:11.847299 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: E1006 02:13:07.401523    1349 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-891734" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-891734' and this object
	W1006 02:14:11.847488 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: W1006 02:13:07.401566    1349 reflector.go:535] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-891734" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-891734' and this object
	W1006 02:14:11.847699 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: E1006 02:13:07.401576    1349 reflector.go:147] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-891734" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-891734' and this object
	W1006 02:14:11.847883 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: W1006 02:13:07.404375    1349 reflector.go:535] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-891734" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-891734' and this object
	W1006 02:14:11.848086 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: E1006 02:13:07.404418    1349 reflector.go:147] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-891734" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-891734' and this object
	W1006 02:14:11.848286 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: W1006 02:13:07.404471    1349 reflector.go:535] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-891734" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-891734' and this object
	W1006 02:14:11.848492 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: E1006 02:13:07.404482    1349 reflector.go:147] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-891734" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-891734' and this object
	I1006 02:14:11.877035 2268811 kapi.go:107] duration metric: took 1m29.035104629s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1006 02:14:11.879322 2268811 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-891734 cluster.
	I1006 02:14:11.881172 2268811 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1006 02:14:11.882924 2268811 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
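	As the message above notes, pods created before the webhook came up only get the credential mount after a refresh; a minimal sketch against this profile (profile name taken from the log, flag per minikube's own hint):

	  # Re-run the gcp-auth addon so already-running pods get the credential mount
	  minikube -p addons-891734 addons enable gcp-auth --refresh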
	I1006 02:14:11.883934 2268811 logs.go:123] Gathering logs for dmesg ...
	I1006 02:14:11.885065 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 02:14:11.907615 2268811 logs.go:123] Gathering logs for coredns [baadd818084efc2a4aabb4ae2b956bd8d563513bd47c92b42ba75462794db637] ...
	I1006 02:14:11.907646 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 baadd818084efc2a4aabb4ae2b956bd8d563513bd47c92b42ba75462794db637"
	I1006 02:14:11.952643 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:11.964328 2268811 logs.go:123] Gathering logs for kube-proxy [bf975e8417fd962a40c466d2913d0fdd34264e9aead6c69acba465f0c00401f5] ...
	I1006 02:14:11.964358 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf975e8417fd962a40c466d2913d0fdd34264e9aead6c69acba465f0c00401f5"
	I1006 02:14:12.019538 2268811 logs.go:123] Gathering logs for kube-controller-manager [91a705de1743c9ffe70d4720f9dac4523e884bbe289aea05416b550ab6249c67] ...
	I1006 02:14:12.019570 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91a705de1743c9ffe70d4720f9dac4523e884bbe289aea05416b550ab6249c67"
	I1006 02:14:12.100025 2268811 logs.go:123] Gathering logs for kindnet [5a84cf3db6ffb45808d711e27820b13396b233b3c67794d1aaab46bfe1614cd3] ...
	I1006 02:14:12.100061 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a84cf3db6ffb45808d711e27820b13396b233b3c67794d1aaab46bfe1614cd3"
	I1006 02:14:12.164544 2268811 out.go:309] Setting ErrFile to fd 2...
	I1006 02:14:12.164567 2268811 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1006 02:14:12.164631 2268811 out.go:239] X Problems detected in kubelet:
	W1006 02:14:12.164647 2268811 out.go:239]   Oct 06 02:13:07 addons-891734 kubelet[1349]: E1006 02:13:07.401576    1349 reflector.go:147] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-891734" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-891734' and this object
	W1006 02:14:12.164656 2268811 out.go:239]   Oct 06 02:13:07 addons-891734 kubelet[1349]: W1006 02:13:07.404375    1349 reflector.go:535] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-891734" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-891734' and this object
	W1006 02:14:12.164698 2268811 out.go:239]   Oct 06 02:13:07 addons-891734 kubelet[1349]: E1006 02:13:07.404418    1349 reflector.go:147] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-891734" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-891734' and this object
	W1006 02:14:12.164718 2268811 out.go:239]   Oct 06 02:13:07 addons-891734 kubelet[1349]: W1006 02:13:07.404471    1349 reflector.go:535] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-891734" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-891734' and this object
	W1006 02:14:12.164726 2268811 out.go:239]   Oct 06 02:13:07 addons-891734 kubelet[1349]: E1006 02:13:07.404482    1349 reflector.go:147] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-891734" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-891734' and this object
	I1006 02:14:12.164732 2268811 out.go:309] Setting ErrFile to fd 2...
	I1006 02:14:12.164739 2268811 out.go:343] TERM=,COLORTERM=, which probably does not support color
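	The repeated "no relationship found between node 'addons-891734' and this object" failures above come from the API server's Node authorizer: right after kubelet startup its node-to-object graph can lag, so list/watch calls for Secrets and ConfigMaps are briefly forbidden and the reflectors retry. A rough spot-check that access has settled, via impersonation (an approximation; the Node authorizer keys off the live graph, not only RBAC):

	  kubectl --context addons-891734 auth can-i list configmaps \
	    --as=system:node:addons-891734 --as-group=system:nodes -n default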
	I1006 02:14:12.174407 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:12.456655 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:12.675745 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:12.952730 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:13.175971 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:13.455282 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:13.675317 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:13.965614 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:14.174884 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:14.451956 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:14.676371 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:14.953009 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:15.174760 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:15.452682 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:15.690110 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:15.953199 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:16.175327 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:16.459916 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:16.674826 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:16.955590 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:17.175631 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:17.453293 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:17.675179 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:17.953933 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:18.174723 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:18.456791 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:18.678347 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:18.952967 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:19.176286 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:19.452805 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:19.674756 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:19.953449 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:20.176133 2268811 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 02:14:20.461335 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:20.694270 2268811 kapi.go:107] duration metric: took 1m41.564345369s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1006 02:14:20.952843 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:21.457383 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:21.952674 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:22.166136 2268811 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1006 02:14:22.176361 2268811 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1006 02:14:22.177731 2268811 api_server.go:141] control plane version: v1.28.2
	I1006 02:14:22.177766 2268811 api_server.go:131] duration metric: took 11.539786206s to wait for apiserver health ...
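	The healthz probe above is plain HTTPS; /healthz is readable without credentials on a default cluster (via the system:public-info-viewer binding), so it can be reproduced by hand. A sketch, skipping TLS verification for brevity:

	  curl -k https://192.168.49.2:8443/healthz
	  # expected body: ok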
	I1006 02:14:22.177779 2268811 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 02:14:22.177803 2268811 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1006 02:14:22.177866 2268811 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1006 02:14:22.226655 2268811 cri.go:89] found id: "27c082cd8ae5999ad0f0ca05adbe90847b817c9351c089bc379a187e9d384369"
	I1006 02:14:22.226678 2268811 cri.go:89] found id: ""
	I1006 02:14:22.226686 2268811 logs.go:284] 1 containers: [27c082cd8ae5999ad0f0ca05adbe90847b817c9351c089bc379a187e9d384369]
	I1006 02:14:22.226744 2268811 ssh_runner.go:195] Run: which crictl
	I1006 02:14:22.231417 2268811 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1006 02:14:22.231542 2268811 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1006 02:14:22.296913 2268811 cri.go:89] found id: "3e58465ec0dad3716a0f6fdcbfeb0e6dd40b06e22263c02ffd01c94a8462da7f"
	I1006 02:14:22.296938 2268811 cri.go:89] found id: ""
	I1006 02:14:22.296947 2268811 logs.go:284] 1 containers: [3e58465ec0dad3716a0f6fdcbfeb0e6dd40b06e22263c02ffd01c94a8462da7f]
	I1006 02:14:22.297023 2268811 ssh_runner.go:195] Run: which crictl
	I1006 02:14:22.302824 2268811 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1006 02:14:22.302907 2268811 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1006 02:14:22.373914 2268811 cri.go:89] found id: "baadd818084efc2a4aabb4ae2b956bd8d563513bd47c92b42ba75462794db637"
	I1006 02:14:22.373937 2268811 cri.go:89] found id: ""
	I1006 02:14:22.373947 2268811 logs.go:284] 1 containers: [baadd818084efc2a4aabb4ae2b956bd8d563513bd47c92b42ba75462794db637]
	I1006 02:14:22.374008 2268811 ssh_runner.go:195] Run: which crictl
	I1006 02:14:22.378944 2268811 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1006 02:14:22.379019 2268811 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1006 02:14:22.445207 2268811 cri.go:89] found id: "d8f5c121581ac399db699f771268c9fd21bab013daa95c65dc53b921e150b077"
	I1006 02:14:22.445231 2268811 cri.go:89] found id: ""
	I1006 02:14:22.445240 2268811 logs.go:284] 1 containers: [d8f5c121581ac399db699f771268c9fd21bab013daa95c65dc53b921e150b077]
	I1006 02:14:22.445297 2268811 ssh_runner.go:195] Run: which crictl
	I1006 02:14:22.458152 2268811 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1006 02:14:22.458291 2268811 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1006 02:14:22.471022 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:22.538536 2268811 cri.go:89] found id: "bf975e8417fd962a40c466d2913d0fdd34264e9aead6c69acba465f0c00401f5"
	I1006 02:14:22.538560 2268811 cri.go:89] found id: ""
	I1006 02:14:22.538569 2268811 logs.go:284] 1 containers: [bf975e8417fd962a40c466d2913d0fdd34264e9aead6c69acba465f0c00401f5]
	I1006 02:14:22.538650 2268811 ssh_runner.go:195] Run: which crictl
	I1006 02:14:22.543625 2268811 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1006 02:14:22.543695 2268811 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1006 02:14:22.600158 2268811 cri.go:89] found id: "91a705de1743c9ffe70d4720f9dac4523e884bbe289aea05416b550ab6249c67"
	I1006 02:14:22.600177 2268811 cri.go:89] found id: ""
	I1006 02:14:22.600185 2268811 logs.go:284] 1 containers: [91a705de1743c9ffe70d4720f9dac4523e884bbe289aea05416b550ab6249c67]
	I1006 02:14:22.600245 2268811 ssh_runner.go:195] Run: which crictl
	I1006 02:14:22.604893 2268811 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1006 02:14:22.604982 2268811 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1006 02:14:22.656632 2268811 cri.go:89] found id: "5a84cf3db6ffb45808d711e27820b13396b233b3c67794d1aaab46bfe1614cd3"
	I1006 02:14:22.656655 2268811 cri.go:89] found id: ""
	I1006 02:14:22.656663 2268811 logs.go:284] 1 containers: [5a84cf3db6ffb45808d711e27820b13396b233b3c67794d1aaab46bfe1614cd3]
	I1006 02:14:22.656719 2268811 ssh_runner.go:195] Run: which crictl
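	Each component lookup above is the same two-step crictl pattern: resolve the container ID by name, then tail its logs. Condensed into a sketch (run on the node, e.g. via "minikube -p addons-891734 ssh"):

	  ID=$(sudo crictl ps -a --quiet --name=kube-apiserver)
	  sudo crictl logs --tail 400 "$ID"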
	I1006 02:14:22.661485 2268811 logs.go:123] Gathering logs for kubelet ...
	I1006 02:14:22.661509 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1006 02:14:22.728077 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: W1006 02:13:07.365373    1349 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-891734" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-891734' and this object
	W1006 02:14:22.728293 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: E1006 02:13:07.365495    1349 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-891734" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-891734' and this object
	W1006 02:14:22.729638 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: W1006 02:13:07.377500    1349 reflector.go:535] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-891734" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-891734' and this object
	W1006 02:14:22.729824 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: E1006 02:13:07.377613    1349 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-891734" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-891734' and this object
	W1006 02:14:22.730708 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: W1006 02:13:07.390685    1349 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-891734" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-891734' and this object
	W1006 02:14:22.730900 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: E1006 02:13:07.390724    1349 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-891734" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-891734' and this object
	W1006 02:14:22.731074 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: W1006 02:13:07.401483    1349 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-891734" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-891734' and this object
	W1006 02:14:22.731262 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: E1006 02:13:07.401523    1349 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-891734" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-891734' and this object
	W1006 02:14:22.731448 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: W1006 02:13:07.401566    1349 reflector.go:535] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-891734" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-891734' and this object
	W1006 02:14:22.731654 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: E1006 02:13:07.401576    1349 reflector.go:147] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-891734" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-891734' and this object
	W1006 02:14:22.731838 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: W1006 02:13:07.404375    1349 reflector.go:535] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-891734" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-891734' and this object
	W1006 02:14:22.732046 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: E1006 02:13:07.404418    1349 reflector.go:147] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-891734" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-891734' and this object
	W1006 02:14:22.732232 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: W1006 02:13:07.404471    1349 reflector.go:535] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-891734" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-891734' and this object
	W1006 02:14:22.732437 2268811 logs.go:138] Found kubelet problem: Oct 06 02:13:07 addons-891734 kubelet[1349]: E1006 02:13:07.404482    1349 reflector.go:147] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-891734" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-891734' and this object
	I1006 02:14:22.770222 2268811 logs.go:123] Gathering logs for describe nodes ...
	I1006 02:14:22.770256 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1006 02:14:22.918805 2268811 logs.go:123] Gathering logs for kube-apiserver [27c082cd8ae5999ad0f0ca05adbe90847b817c9351c089bc379a187e9d384369] ...
	I1006 02:14:22.918834 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27c082cd8ae5999ad0f0ca05adbe90847b817c9351c089bc379a187e9d384369"
	I1006 02:14:22.952916 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:23.009674 2268811 logs.go:123] Gathering logs for etcd [3e58465ec0dad3716a0f6fdcbfeb0e6dd40b06e22263c02ffd01c94a8462da7f] ...
	I1006 02:14:23.009710 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e58465ec0dad3716a0f6fdcbfeb0e6dd40b06e22263c02ffd01c94a8462da7f"
	I1006 02:14:23.068255 2268811 logs.go:123] Gathering logs for kube-scheduler [d8f5c121581ac399db699f771268c9fd21bab013daa95c65dc53b921e150b077] ...
	I1006 02:14:23.068290 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8f5c121581ac399db699f771268c9fd21bab013daa95c65dc53b921e150b077"
	I1006 02:14:23.118918 2268811 logs.go:123] Gathering logs for kube-proxy [bf975e8417fd962a40c466d2913d0fdd34264e9aead6c69acba465f0c00401f5] ...
	I1006 02:14:23.118950 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf975e8417fd962a40c466d2913d0fdd34264e9aead6c69acba465f0c00401f5"
	I1006 02:14:23.172064 2268811 logs.go:123] Gathering logs for kube-controller-manager [91a705de1743c9ffe70d4720f9dac4523e884bbe289aea05416b550ab6249c67] ...
	I1006 02:14:23.172091 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91a705de1743c9ffe70d4720f9dac4523e884bbe289aea05416b550ab6249c67"
	I1006 02:14:23.243207 2268811 logs.go:123] Gathering logs for kindnet [5a84cf3db6ffb45808d711e27820b13396b233b3c67794d1aaab46bfe1614cd3] ...
	I1006 02:14:23.243240 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a84cf3db6ffb45808d711e27820b13396b233b3c67794d1aaab46bfe1614cd3"
	I1006 02:14:23.288049 2268811 logs.go:123] Gathering logs for CRI-O ...
	I1006 02:14:23.288075 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1006 02:14:23.387341 2268811 logs.go:123] Gathering logs for container status ...
	I1006 02:14:23.387444 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1006 02:14:23.452312 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:23.544080 2268811 logs.go:123] Gathering logs for dmesg ...
	I1006 02:14:23.544147 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1006 02:14:23.574457 2268811 logs.go:123] Gathering logs for coredns [baadd818084efc2a4aabb4ae2b956bd8d563513bd47c92b42ba75462794db637] ...
	I1006 02:14:23.574584 2268811 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 baadd818084efc2a4aabb4ae2b956bd8d563513bd47c92b42ba75462794db637"
	I1006 02:14:23.693765 2268811 out.go:309] Setting ErrFile to fd 2...
	I1006 02:14:23.693892 2268811 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1006 02:14:23.693985 2268811 out.go:239] X Problems detected in kubelet:
	W1006 02:14:23.694041 2268811 out.go:239]   Oct 06 02:13:07 addons-891734 kubelet[1349]: E1006 02:13:07.401576    1349 reflector.go:147] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-891734" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-891734' and this object
	W1006 02:14:23.694077 2268811 out.go:239]   Oct 06 02:13:07 addons-891734 kubelet[1349]: W1006 02:13:07.404375    1349 reflector.go:535] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-891734" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-891734' and this object
	W1006 02:14:23.694128 2268811 out.go:239]   Oct 06 02:13:07 addons-891734 kubelet[1349]: E1006 02:13:07.404418    1349 reflector.go:147] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-891734" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-891734' and this object
	W1006 02:14:23.694167 2268811 out.go:239]   Oct 06 02:13:07 addons-891734 kubelet[1349]: W1006 02:13:07.404471    1349 reflector.go:535] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-891734" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-891734' and this object
	W1006 02:14:23.694572 2268811 out.go:239]   Oct 06 02:13:07 addons-891734 kubelet[1349]: E1006 02:13:07.404482    1349 reflector.go:147] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-891734" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-891734' and this object
	I1006 02:14:23.694582 2268811 out.go:309] Setting ErrFile to fd 2...
	I1006 02:14:23.694591 2268811 out.go:343] TERM=,COLORTERM=, which probably does not support color
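	The "Found kubelet problem" entries are produced by scanning the journalctl output for klog warning/error records. Roughly equivalent by hand (the regex is an assumption matching klog's "E1006 02:13:07" severity prefix):

	  sudo journalctl -u kubelet -n 400 | grep -E ': [WE][0-9]{4} '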
	I1006 02:14:23.952980 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:24.479261 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:24.954680 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:25.453195 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:25.953641 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:26.454494 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:26.951929 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:27.452419 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:27.954417 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:28.452789 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:28.953501 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:29.459471 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:29.953289 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:30.453986 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:30.952085 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:31.461129 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:31.952232 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:32.453029 2268811 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 02:14:32.952112 2268811 kapi.go:107] duration metric: took 1m53.521211537s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1006 02:14:32.954357 2268811 out.go:177] * Enabled addons: ingress-dns, nvidia-device-plugin, cloud-spanner, storage-provisioner, metrics-server, inspektor-gadget, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1006 02:14:32.956494 2268811 addons.go:502] enable addons completed in 2m0.291613144s: enabled=[ingress-dns nvidia-device-plugin cloud-spanner storage-provisioner metrics-server inspektor-gadget default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
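	With all twelve addons reported enabled, the profile's addon state can be verified directly; a quick check:

	  minikube -p addons-891734 addons list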
	I1006 02:14:33.706765 2268811 system_pods.go:59] 18 kube-system pods found
	I1006 02:14:33.706810 2268811 system_pods.go:61] "coredns-5dd5756b68-mls87" [1d0b1ab1-dc34-47a8-8f56-13747d787bdc] Running
	I1006 02:14:33.706817 2268811 system_pods.go:61] "csi-hostpath-attacher-0" [515127e5-5d90-410f-ac30-e60cfce4e93f] Running
	I1006 02:14:33.706822 2268811 system_pods.go:61] "csi-hostpath-resizer-0" [e1e2cd18-1de7-48da-aed5-4f546b291992] Running
	I1006 02:14:33.706829 2268811 system_pods.go:61] "csi-hostpathplugin-4h9hl" [9c6dc7b1-ea02-4e6e-9e9e-a4d92309b2eb] Running
	I1006 02:14:33.706834 2268811 system_pods.go:61] "etcd-addons-891734" [075a1860-200d-40e0-8e0b-9d8c9dccebb9] Running
	I1006 02:14:33.706840 2268811 system_pods.go:61] "kindnet-nkcw9" [c60e55d6-4d94-451b-9fd3-5b0ad54795e6] Running
	I1006 02:14:33.706856 2268811 system_pods.go:61] "kube-apiserver-addons-891734" [aaebdb03-4f04-48be-b104-d4f69a32fb41] Running
	I1006 02:14:33.706869 2268811 system_pods.go:61] "kube-controller-manager-addons-891734" [c7f12407-cd96-49f1-b546-6dcb3e38f994] Running
	I1006 02:14:33.706877 2268811 system_pods.go:61] "kube-ingress-dns-minikube" [c553fbd4-fcc7-42fe-868c-e85cd1ac80cb] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1006 02:14:33.706884 2268811 system_pods.go:61] "kube-proxy-c67j7" [06d73d28-55f1-4f4f-b9d5-b8f7278f8dfd] Running
	I1006 02:14:33.706893 2268811 system_pods.go:61] "kube-scheduler-addons-891734" [1dc212e1-a1c0-4e23-ab14-18aaaaf5edee] Running
	I1006 02:14:33.706899 2268811 system_pods.go:61] "metrics-server-7c66d45ddc-v9pjf" [83769c9b-3320-4399-a5da-e3f6c6e53442] Running
	I1006 02:14:33.706904 2268811 system_pods.go:61] "nvidia-device-plugin-daemonset-2pwfm" [e2bb64d9-423f-4701-af20-ede29bdaf239] Running
	I1006 02:14:33.706914 2268811 system_pods.go:61] "registry-proxy-jnd69" [bb78ebbe-5710-4e7f-830f-33db94c493b0] Running
	I1006 02:14:33.706920 2268811 system_pods.go:61] "registry-x4gxk" [ea43b86b-677c-480a-9d39-06963a88c8e4] Running
	I1006 02:14:33.706925 2268811 system_pods.go:61] "snapshot-controller-58dbcc7b99-29xdz" [c539aad2-3c5e-4636-9fd9-4c1218392544] Running
	I1006 02:14:33.706938 2268811 system_pods.go:61] "snapshot-controller-58dbcc7b99-pz4b4" [dd831192-88f6-4cb5-b895-1bd4e1af8db1] Running
	I1006 02:14:33.706945 2268811 system_pods.go:61] "storage-provisioner" [42e562e8-01bd-4299-8476-47d2f260e203] Running
	I1006 02:14:33.706954 2268811 system_pods.go:74] duration metric: took 11.529169298s to wait for pod list to return data ...
	I1006 02:14:33.706970 2268811 default_sa.go:34] waiting for default service account to be created ...
	I1006 02:14:33.709615 2268811 default_sa.go:45] found service account: "default"
	I1006 02:14:33.709639 2268811 default_sa.go:55] duration metric: took 2.660847ms for default service account to be created ...
	I1006 02:14:33.709649 2268811 system_pods.go:116] waiting for k8s-apps to be running ...
	I1006 02:14:33.719939 2268811 system_pods.go:86] 18 kube-system pods found
	I1006 02:14:33.720010 2268811 system_pods.go:89] "coredns-5dd5756b68-mls87" [1d0b1ab1-dc34-47a8-8f56-13747d787bdc] Running
	I1006 02:14:33.720035 2268811 system_pods.go:89] "csi-hostpath-attacher-0" [515127e5-5d90-410f-ac30-e60cfce4e93f] Running
	I1006 02:14:33.720056 2268811 system_pods.go:89] "csi-hostpath-resizer-0" [e1e2cd18-1de7-48da-aed5-4f546b291992] Running
	I1006 02:14:33.720092 2268811 system_pods.go:89] "csi-hostpathplugin-4h9hl" [9c6dc7b1-ea02-4e6e-9e9e-a4d92309b2eb] Running
	I1006 02:14:33.720122 2268811 system_pods.go:89] "etcd-addons-891734" [075a1860-200d-40e0-8e0b-9d8c9dccebb9] Running
	I1006 02:14:33.720145 2268811 system_pods.go:89] "kindnet-nkcw9" [c60e55d6-4d94-451b-9fd3-5b0ad54795e6] Running
	I1006 02:14:33.720166 2268811 system_pods.go:89] "kube-apiserver-addons-891734" [aaebdb03-4f04-48be-b104-d4f69a32fb41] Running
	I1006 02:14:33.720198 2268811 system_pods.go:89] "kube-controller-manager-addons-891734" [c7f12407-cd96-49f1-b546-6dcb3e38f994] Running
	I1006 02:14:33.720224 2268811 system_pods.go:89] "kube-ingress-dns-minikube" [c553fbd4-fcc7-42fe-868c-e85cd1ac80cb] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1006 02:14:33.720250 2268811 system_pods.go:89] "kube-proxy-c67j7" [06d73d28-55f1-4f4f-b9d5-b8f7278f8dfd] Running
	I1006 02:14:33.720273 2268811 system_pods.go:89] "kube-scheduler-addons-891734" [1dc212e1-a1c0-4e23-ab14-18aaaaf5edee] Running
	I1006 02:14:33.720306 2268811 system_pods.go:89] "metrics-server-7c66d45ddc-v9pjf" [83769c9b-3320-4399-a5da-e3f6c6e53442] Running
	I1006 02:14:33.720335 2268811 system_pods.go:89] "nvidia-device-plugin-daemonset-2pwfm" [e2bb64d9-423f-4701-af20-ede29bdaf239] Running
	I1006 02:14:33.720357 2268811 system_pods.go:89] "registry-proxy-jnd69" [bb78ebbe-5710-4e7f-830f-33db94c493b0] Running
	I1006 02:14:33.720378 2268811 system_pods.go:89] "registry-x4gxk" [ea43b86b-677c-480a-9d39-06963a88c8e4] Running
	I1006 02:14:33.720412 2268811 system_pods.go:89] "snapshot-controller-58dbcc7b99-29xdz" [c539aad2-3c5e-4636-9fd9-4c1218392544] Running
	I1006 02:14:33.720434 2268811 system_pods.go:89] "snapshot-controller-58dbcc7b99-pz4b4" [dd831192-88f6-4cb5-b895-1bd4e1af8db1] Running
	I1006 02:14:33.720454 2268811 system_pods.go:89] "storage-provisioner" [42e562e8-01bd-4299-8476-47d2f260e203] Running
	I1006 02:14:33.720484 2268811 system_pods.go:126] duration metric: took 10.828606ms to wait for k8s-apps to be running ...
	I1006 02:14:33.720528 2268811 system_svc.go:44] waiting for kubelet service to be running ....
	I1006 02:14:33.720614 2268811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 02:14:33.735038 2268811 system_svc.go:56] duration metric: took 14.500969ms WaitForService to wait for kubelet.
	I1006 02:14:33.735137 2268811 kubeadm.go:581] duration metric: took 2m0.83495127s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1006 02:14:33.735172 2268811 node_conditions.go:102] verifying NodePressure condition ...
	I1006 02:14:33.738527 2268811 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 02:14:33.738611 2268811 node_conditions.go:123] node cpu capacity is 2
	I1006 02:14:33.738638 2268811 node_conditions.go:105] duration metric: took 3.445348ms to run NodePressure ...
	I1006 02:14:33.738661 2268811 start.go:228] waiting for startup goroutines ...
	I1006 02:14:33.738690 2268811 start.go:233] waiting for cluster config update ...
	I1006 02:14:33.738723 2268811 start.go:242] writing updated cluster config ...
	I1006 02:14:33.739035 2268811 ssh_runner.go:195] Run: rm -f paused
	I1006 02:14:33.920426 2268811 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1006 02:14:33.925516 2268811 out.go:177] * Done! kubectl is now configured to use "addons-891734" cluster and "default" namespace by default
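	At this point the kubeconfig current-context points at the new cluster; a one-line check (expected output shown as a comment):

	  kubectl config current-context   # addons-891734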
	
	* 
	* ==> CRI-O <==
	* Oct 06 02:17:45 addons-891734 crio[882]: time="2023-10-06 02:17:45.737297842Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=1ca13406-6a0d-4221-afb4-bd420e89eb5c name=/runtime.v1.ImageService/ImageStatus
	Oct 06 02:17:45 addons-891734 crio[882]: time="2023-10-06 02:17:45.737507496Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:97e050c3e21e9472ce8eb8fcb7bb8f23063c0b473fe44bdc42246bb01c15cdd4,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=1ca13406-6a0d-4221-afb4-bd420e89eb5c name=/runtime.v1.ImageService/ImageStatus
	Oct 06 02:17:45 addons-891734 crio[882]: time="2023-10-06 02:17:45.738367126Z" level=info msg="Creating container: default/hello-world-app-5d77478584-mm6ft/hello-world-app" id=e86091df-368f-42df-ae46-64c0de83b230 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 02:17:45 addons-891734 crio[882]: time="2023-10-06 02:17:45.738478054Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 06 02:17:45 addons-891734 crio[882]: time="2023-10-06 02:17:45.827695178Z" level=info msg="Created container 635e4b59f2236b19bea16aee27a2edcb89dcb439cf2cb45b43f2628b285570e9: default/hello-world-app-5d77478584-mm6ft/hello-world-app" id=e86091df-368f-42df-ae46-64c0de83b230 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 02:17:45 addons-891734 crio[882]: time="2023-10-06 02:17:45.828570242Z" level=info msg="Starting container: 635e4b59f2236b19bea16aee27a2edcb89dcb439cf2cb45b43f2628b285570e9" id=beb098ac-3909-4975-92ea-c94602b12b95 name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 02:17:45 addons-891734 crio[882]: time="2023-10-06 02:17:45.842164202Z" level=info msg="Started container" PID=8650 containerID=635e4b59f2236b19bea16aee27a2edcb89dcb439cf2cb45b43f2628b285570e9 description=default/hello-world-app-5d77478584-mm6ft/hello-world-app id=beb098ac-3909-4975-92ea-c94602b12b95 name=/runtime.v1.RuntimeService/StartContainer sandboxID=35fa97b15546084fac8ee67390fa3cff284fc910b0e37f22f5f9d28c345e364e
	Oct 06 02:17:45 addons-891734 conmon[8639]: conmon 635e4b59f2236b19bea1 <ninfo>: container 8650 exited with status 1
	Oct 06 02:17:46 addons-891734 crio[882]: time="2023-10-06 02:17:46.374892662Z" level=info msg="Stopping container: d3cb5f139682b6b3946e351e11987430b9c82218c9318ee52cdd823d45834ae0 (timeout: 2s)" id=5594783b-658d-4f89-beea-55c3258dcc26 name=/runtime.v1.RuntimeService/StopContainer
	Oct 06 02:17:46 addons-891734 crio[882]: time="2023-10-06 02:17:46.566395980Z" level=info msg="Removing container: 6d90a4e11e3214cc4c9ed15dee44dd820fc167142d40685497eda16a3be98c3a" id=a220cf81-5fdb-4ad0-b49e-a7da6adba859 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 06 02:17:46 addons-891734 crio[882]: time="2023-10-06 02:17:46.606956829Z" level=info msg="Removed container 6d90a4e11e3214cc4c9ed15dee44dd820fc167142d40685497eda16a3be98c3a: default/hello-world-app-5d77478584-mm6ft/hello-world-app" id=a220cf81-5fdb-4ad0-b49e-a7da6adba859 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 06 02:17:48 addons-891734 crio[882]: time="2023-10-06 02:17:48.384667988Z" level=warning msg="Stopping container d3cb5f139682b6b3946e351e11987430b9c82218c9318ee52cdd823d45834ae0 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=5594783b-658d-4f89-beea-55c3258dcc26 name=/runtime.v1.RuntimeService/StopContainer
	Oct 06 02:17:48 addons-891734 conmon[4958]: conmon d3cb5f139682b6b3946e <ninfo>: container 4969 exited with status 137
	Oct 06 02:17:48 addons-891734 crio[882]: time="2023-10-06 02:17:48.543694091Z" level=info msg="Stopped container d3cb5f139682b6b3946e351e11987430b9c82218c9318ee52cdd823d45834ae0: ingress-nginx/ingress-nginx-controller-5c4c674fdc-s66gd/controller" id=5594783b-658d-4f89-beea-55c3258dcc26 name=/runtime.v1.RuntimeService/StopContainer
	Oct 06 02:17:48 addons-891734 crio[882]: time="2023-10-06 02:17:48.544188674Z" level=info msg="Stopping pod sandbox: 4bc74895e51095319a3d9d3fb8c7f793fd7eaa83ef1fcfa4564eaaabb01be3bd" id=53161066-5e7e-4b2d-8780-745d4d4f321e name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 06 02:17:48 addons-891734 crio[882]: time="2023-10-06 02:17:48.547551105Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-ZDGN7SRLCTXOXKPW - [0:0]\n:KUBE-HP-WN5SACDYJWVHY3QN - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-ZDGN7SRLCTXOXKPW\n-X KUBE-HP-WN5SACDYJWVHY3QN\nCOMMIT\n"
	Oct 06 02:17:48 addons-891734 crio[882]: time="2023-10-06 02:17:48.549052571Z" level=info msg="Closing host port tcp:80"
	Oct 06 02:17:48 addons-891734 crio[882]: time="2023-10-06 02:17:48.549096896Z" level=info msg="Closing host port tcp:443"
	Oct 06 02:17:48 addons-891734 crio[882]: time="2023-10-06 02:17:48.550627819Z" level=info msg="Host port tcp:80 does not have an open socket"
	Oct 06 02:17:48 addons-891734 crio[882]: time="2023-10-06 02:17:48.550654626Z" level=info msg="Host port tcp:443 does not have an open socket"
	Oct 06 02:17:48 addons-891734 crio[882]: time="2023-10-06 02:17:48.550821095Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-5c4c674fdc-s66gd Namespace:ingress-nginx ID:4bc74895e51095319a3d9d3fb8c7f793fd7eaa83ef1fcfa4564eaaabb01be3bd UID:b9ce15aa-5921-4cd2-b810-1292c8d62318 NetNS:/var/run/netns/e4b7d860-50e3-49f8-90d4-d4daa6e21a5a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 06 02:17:48 addons-891734 crio[882]: time="2023-10-06 02:17:48.550988065Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-5c4c674fdc-s66gd from CNI network \"kindnet\" (type=ptp)"
	Oct 06 02:17:48 addons-891734 crio[882]: time="2023-10-06 02:17:48.572677745Z" level=info msg="Stopped pod sandbox: 4bc74895e51095319a3d9d3fb8c7f793fd7eaa83ef1fcfa4564eaaabb01be3bd" id=53161066-5e7e-4b2d-8780-745d4d4f321e name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 06 02:17:49 addons-891734 crio[882]: time="2023-10-06 02:17:49.575160567Z" level=info msg="Removing container: d3cb5f139682b6b3946e351e11987430b9c82218c9318ee52cdd823d45834ae0" id=0c0f5812-2d5f-4782-b832-eea805a43052 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 06 02:17:49 addons-891734 crio[882]: time="2023-10-06 02:17:49.593441036Z" level=info msg="Removed container d3cb5f139682b6b3946e351e11987430b9c82218c9318ee52cdd823d45834ae0: ingress-nginx/ingress-nginx-controller-5c4c674fdc-s66gd/controller" id=0c0f5812-2d5f-4782-b832-eea805a43052 name=/runtime.v1.RuntimeService/RemoveContainer
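	The controller shutdown above follows the standard stop sequence: StopContainer with a 2s grace period, SIGTERM ignored, then a force kill, which conmon reports as exit status 137 (128 + 9, i.e. SIGKILL). The equivalent manual stop, as a sketch from the node:

	  sudo crictl stop --timeout 2 d3cb5f139682b6b3946e351e11987430b9c82218c9318ee52cdd823d45834ae0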
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	635e4b59f2236       97e050c3e21e9472ce8eb8fcb7bb8f23063c0b473fe44bdc42246bb01c15cdd4                                                             7 seconds ago        Exited              hello-world-app           2                   35fa97b155460       hello-world-app-5d77478584-mm6ft
	26cdf39dbb75d       ghcr.io/headlamp-k8s/headlamp@sha256:44b17c125fc5da7899f2583ca3468a31cc80ea52c9ef2aad503f58d91908e4c1                        About a minute ago   Running             headlamp                  0                   81b7b37c9e328       headlamp-58b88cff49-8mm4d
	850fa3f0d66ab       docker.io/library/nginx@sha256:4c93a3bd8bf95412889dd84213570102176b6052d88bb828eaf449c56aca55ef                              2 minutes ago        Running             nginx                     0                   6d8b351689f17       nginx
	40d495ce467fd       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa                 3 minutes ago        Running             gcp-auth                  0                   6881844d58c23       gcp-auth-d4c87556c-mg9lt
	b46242eef7f6d       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             3 minutes ago        Running             local-path-provisioner    0                   ee385c4266981       local-path-provisioner-78b46b4d5c-288pl
	07073c0d26661       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b   4 minutes ago        Exited              patch                     0                   ffbc10b9c486c       ingress-nginx-admission-patch-lz4n7
	dd9be9419c7e1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b   4 minutes ago        Exited              create                    0                   6e98a6ade8fec       ingress-nginx-admission-create-cmxjz
	baadd818084ef       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                             4 minutes ago        Running             coredns                   0                   ecd6349d0fe56       coredns-5dd5756b68-mls87
	8ab22a60d8526       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             4 minutes ago        Running             storage-provisioner       0                   cd7c5c7fc00a1       storage-provisioner
	bf975e8417fd9       7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa                                                             5 minutes ago        Running             kube-proxy                0                   ae1cda58f6931       kube-proxy-c67j7
	5a84cf3db6ffb       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                                             5 minutes ago        Running             kindnet-cni               0                   c746e1bae884e       kindnet-nkcw9
	91a705de1743c       89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c                                                             5 minutes ago        Running             kube-controller-manager   0                   1c842c0610325       kube-controller-manager-addons-891734
	3e58465ec0dad       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                                             5 minutes ago        Running             etcd                      0                   ce0fdae72a226       etcd-addons-891734
	d8f5c121581ac       64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7                                                             5 minutes ago        Running             kube-scheduler            0                   246ad0f60909f       kube-scheduler-addons-891734
	27c082cd8ae59       30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c                                                             5 minutes ago        Running             kube-apiserver            0                   926c19e41ee65       kube-apiserver-addons-891734
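	Note that hello-world-app is Exited with ATTEMPT 2 and, per the conmon line earlier, exit status 1: it is crash-looping rather than failing to pull. Inspecting the failing container from the node, as a sketch (crictl resolves unique ID prefixes):

	  sudo crictl logs 635e4b59f2236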
	
	* 
	* ==> coredns [baadd818084efc2a4aabb4ae2b956bd8d563513bd47c92b42ba75462794db637] <==
	* [INFO] 10.244.0.18:44316 - 11800 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000061778s
	[INFO] 10.244.0.18:44316 - 64349 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000060596s
	[INFO] 10.244.0.18:44316 - 57217 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00006104s
	[INFO] 10.244.0.18:44316 - 63208 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000063788s
	[INFO] 10.244.0.18:44316 - 7030 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001137791s
	[INFO] 10.244.0.18:44316 - 6365 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00115988s
	[INFO] 10.244.0.18:44316 - 15654 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000071165s
	[INFO] 10.244.0.18:37255 - 31361 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000114678s
	[INFO] 10.244.0.18:39165 - 61410 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000052965s
	[INFO] 10.244.0.18:37255 - 57495 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000039952s
	[INFO] 10.244.0.18:39165 - 12153 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000031434s
	[INFO] 10.244.0.18:37255 - 8687 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000066209s
	[INFO] 10.244.0.18:37255 - 43768 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000141049s
	[INFO] 10.244.0.18:39165 - 43837 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000199266s
	[INFO] 10.244.0.18:39165 - 26325 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000077705s
	[INFO] 10.244.0.18:37255 - 22094 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000056904s
	[INFO] 10.244.0.18:37255 - 39270 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000077376s
	[INFO] 10.244.0.18:39165 - 47852 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000169062s
	[INFO] 10.244.0.18:39165 - 38457 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000068736s
	[INFO] 10.244.0.18:37255 - 39772 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001297573s
	[INFO] 10.244.0.18:39165 - 3874 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001285782s
	[INFO] 10.244.0.18:39165 - 50281 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000978895s
	[INFO] 10.244.0.18:37255 - 20577 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001151904s
	[INFO] 10.244.0.18:37255 - 19870 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000107908s
	[INFO] 10.244.0.18:39165 - 35175 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000046672s
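	The NXDOMAIN fan-out above is ordinary ndots:5 search-path expansion: hello-world-app.default.svc.cluster.local has fewer than five dots, so the resolver in the ingress-nginx pod tries every search suffix before the absolute name answers NOERROR. A pod resolv.conf consistent with those suffixes (a sketch; 10.96.0.10 is the conventional kube-dns ClusterIP, assumed here):

	  search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	  nameserver 10.96.0.10
	  options ndots:5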
	
	* 
	* ==> describe nodes <==
	* Name:               addons-891734
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-891734
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=84890cb24d0240d9d992d7c7712ee519ceed4154
	                    minikube.k8s.io/name=addons-891734
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_06T02_12_21_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-891734
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Oct 2023 02:12:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-891734
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Oct 2023 02:17:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Oct 2023 02:16:25 +0000   Fri, 06 Oct 2023 02:12:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Oct 2023 02:16:25 +0000   Fri, 06 Oct 2023 02:12:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Oct 2023 02:16:25 +0000   Fri, 06 Oct 2023 02:12:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Oct 2023 02:16:25 +0000   Fri, 06 Oct 2023 02:13:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-891734
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 845361a49b284867a46b946555afe4bc
	  System UUID:                7cedb657-da4d-416a-a21a-4752cfe58941
	  Boot ID:                    d6810820-8fb1-4098-8489-41f3441712b9
	  Kernel Version:             5.15.0-1047-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-mm6ft           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  gcp-auth                    gcp-auth-d4c87556c-mg9lt                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  headlamp                    headlamp-58b88cff49-8mm4d                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 coredns-5dd5756b68-mls87                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m19s
	  kube-system                 etcd-addons-891734                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m35s
	  kube-system                 kindnet-nkcw9                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m19s
	  kube-system                 kube-apiserver-addons-891734               250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m32s
	  kube-system                 kube-controller-manager-addons-891734      200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m32s
	  kube-system                 kube-proxy-c67j7                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-scheduler-addons-891734               100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	  local-path-storage          local-path-provisioner-78b46b4d5c-288pl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m15s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m40s (x8 over 5m40s)  kubelet          Node addons-891734 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m40s (x8 over 5m40s)  kubelet          Node addons-891734 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m40s (x8 over 5m40s)  kubelet          Node addons-891734 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m33s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m33s                  kubelet          Node addons-891734 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m33s                  kubelet          Node addons-891734 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m33s                  kubelet          Node addons-891734 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m21s                  node-controller  Node addons-891734 event: Registered Node addons-891734 in Controller
	  Normal  NodeReady                4m46s                  kubelet          Node addons-891734 status is now: NodeReady
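Nothing in this node snapshot points at resource pressure: all three pressure conditions are False, the node is Ready, and CPU requests sit at 850m of the 2-CPU allocatable. On a live profile the same headroom check can be repeated with commands along these lines (illustrative, assuming the cluster is still up):

	kubectl --context addons-891734 describe node addons-891734
	kubectl --context addons-891734 get pods -A --field-selector=status.phase!=Running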
	
	* 
	* ==> dmesg <==
	* [  +0.000724] FS-Cache: N-cookie c=0000008a [p=00000081 fl=2 nc=0 na=1]
	[  +0.000881] FS-Cache: N-cookie d=00000000b75c0848{9p.inode} n=00000000c7d5a016
	[  +0.000977] FS-Cache: N-key=[8] '83683b0000000000'
	[  +0.002761] FS-Cache: Duplicate cookie detected
	[  +0.000666] FS-Cache: O-cookie c=00000084 [p=00000081 fl=226 nc=0 na=1]
	[  +0.000893] FS-Cache: O-cookie d=00000000b75c0848{9p.inode} n=00000000ac4eed26
	[  +0.000977] FS-Cache: O-key=[8] '83683b0000000000'
	[  +0.000712] FS-Cache: N-cookie c=0000008b [p=00000081 fl=2 nc=0 na=1]
	[  +0.000873] FS-Cache: N-cookie d=00000000b75c0848{9p.inode} n=00000000829faa01
	[  +0.000986] FS-Cache: N-key=[8] '83683b0000000000'
	[  +2.635876] FS-Cache: Duplicate cookie detected
	[  +0.000696] FS-Cache: O-cookie c=00000082 [p=00000081 fl=226 nc=0 na=1]
	[  +0.000912] FS-Cache: O-cookie d=00000000b75c0848{9p.inode} n=000000000ec71b23
	[  +0.001054] FS-Cache: O-key=[8] '82683b0000000000'
	[  +0.000673] FS-Cache: N-cookie c=0000008d [p=00000081 fl=2 nc=0 na=1]
	[  +0.000887] FS-Cache: N-cookie d=00000000b75c0848{9p.inode} n=00000000a2c82d9a
	[  +0.000977] FS-Cache: N-key=[8] '82683b0000000000'
	[  +0.322832] FS-Cache: Duplicate cookie detected
	[  +0.000747] FS-Cache: O-cookie c=00000087 [p=00000081 fl=226 nc=0 na=1]
	[  +0.000953] FS-Cache: O-cookie d=00000000b75c0848{9p.inode} n=000000008b78ebee
	[  +0.001110] FS-Cache: O-key=[8] '8a683b0000000000'
	[  +0.000713] FS-Cache: N-cookie c=0000008e [p=00000081 fl=2 nc=0 na=1]
	[  +0.000948] FS-Cache: N-cookie d=00000000b75c0848{9p.inode} n=0000000019ad38e0
	[  +0.001028] FS-Cache: N-key=[8] '8a683b0000000000'
	[ +32.439348] new mount options do not match the existing superblock, will be ignored
	
	* 
	* ==> etcd [3e58465ec0dad3716a0f6fdcbfeb0e6dd40b06e22263c02ffd01c94a8462da7f] <==
	* {"level":"info","ts":"2023-10-06T02:12:14.171208Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-06T02:12:14.171249Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-10-06T02:12:14.171303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-10-06T02:12:14.171344Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-10-06T02:12:14.171386Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-10-06T02:12:14.171425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-10-06T02:12:14.175226Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-891734 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-06T02:12:14.175321Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-06T02:12:14.176315Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-06T02:12:14.176461Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-06T02:12:14.176857Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-06T02:12:14.177196Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-06T02:12:14.179127Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-06T02:12:14.179188Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-06T02:12:14.195843Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-10-06T02:12:14.235069Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-06T02:12:14.235109Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2023-10-06T02:12:35.736035Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.645735ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128024271453663358 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns.178b62d5df5a7b63\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns.178b62d5df5a7b63\" value_size:618 lease:8128024271453662635 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2023-10-06T02:12:35.740764Z","caller":"traceutil/trace.go:171","msg":"trace[1722666794] linearizableReadLoop","detail":"{readStateIndex:390; appliedIndex:389; }","duration":"113.065492ms","start":"2023-10-06T02:12:35.627678Z","end":"2023-10-06T02:12:35.740743Z","steps":["trace[1722666794] 'read index received'  (duration: 272.035µs)","trace[1722666794] 'applied index is now lower than readState.Index'  (duration: 112.790618ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-06T02:12:35.7409Z","caller":"traceutil/trace.go:171","msg":"trace[1438481036] transaction","detail":"{read_only:false; response_revision:377; number_of_response:1; }","duration":"173.651236ms","start":"2023-10-06T02:12:35.567237Z","end":"2023-10-06T02:12:35.740889Z","steps":["trace[1438481036] 'process raft request'  (duration: 60.8075ms)","trace[1438481036] 'compare'  (duration: 107.36204ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-06T02:12:35.741298Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.585342ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kindnet\" ","response":"range_response_count:1 size:4681"}
	{"level":"info","ts":"2023-10-06T02:12:35.742127Z","caller":"traceutil/trace.go:171","msg":"trace[1896377010] range","detail":"{range_begin:/registry/daemonsets/kube-system/kindnet; range_end:; response_count:1; response_revision:377; }","duration":"114.411631ms","start":"2023-10-06T02:12:35.627704Z","end":"2023-10-06T02:12:35.742115Z","steps":["trace[1896377010] 'agreement among raft nodes before linearized reading'  (duration: 113.559135ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-06T02:12:35.741003Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.326222ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-c67j7\" ","response":"range_response_count:1 size:3426"}
	{"level":"info","ts":"2023-10-06T02:12:35.742469Z","caller":"traceutil/trace.go:171","msg":"trace[2093012599] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-c67j7; range_end:; response_count:1; response_revision:377; }","duration":"114.79733ms","start":"2023-10-06T02:12:35.627652Z","end":"2023-10-06T02:12:35.742449Z","steps":["trace[2093012599] 'agreement among raft nodes before linearized reading'  (duration: 113.263995ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-06T02:12:36.360465Z","caller":"traceutil/trace.go:171","msg":"trace[1198911854] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"128.853898ms","start":"2023-10-06T02:12:36.231588Z","end":"2023-10-06T02:12:36.360442Z","steps":["trace[1198911854] 'process raft request'  (duration: 100.945331ms)","trace[1198911854] 'marshal mvccpb.KeyValue' {req_type:put; key:/registry/events/kube-system/coredns-5dd5756b68.178b62d5fd4b0f32; req_size:701; } (duration: 27.7062ms)"],"step_count":2}
	
	* 
	* ==> gcp-auth [40d495ce467fdfcddfba7ee6a95a2b405cc95b602b9040b46cb4791a23488c28] <==
	* 2023/10/06 02:14:11 GCP Auth Webhook started!
	2023/10/06 02:14:44 Ready to marshal response ...
	2023/10/06 02:14:44 Ready to write response ...
	2023/10/06 02:15:07 Ready to marshal response ...
	2023/10/06 02:15:07 Ready to write response ...
	2023/10/06 02:15:11 Ready to marshal response ...
	2023/10/06 02:15:11 Ready to write response ...
	2023/10/06 02:15:30 Ready to marshal response ...
	2023/10/06 02:15:30 Ready to write response ...
	2023/10/06 02:15:48 Ready to marshal response ...
	2023/10/06 02:15:48 Ready to write response ...
	2023/10/06 02:15:48 Ready to marshal response ...
	2023/10/06 02:15:48 Ready to write response ...
	2023/10/06 02:15:55 Ready to marshal response ...
	2023/10/06 02:15:55 Ready to write response ...
	2023/10/06 02:16:08 Ready to marshal response ...
	2023/10/06 02:16:08 Ready to write response ...
	2023/10/06 02:16:08 Ready to marshal response ...
	2023/10/06 02:16:08 Ready to write response ...
	2023/10/06 02:16:08 Ready to marshal response ...
	2023/10/06 02:16:08 Ready to write response ...
	2023/10/06 02:17:27 Ready to marshal response ...
	2023/10/06 02:17:27 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  02:17:54 up 12:00,  0 users,  load average: 0.61, 1.60, 2.19
	Linux addons-891734 5.15.0-1047-aws #52~20.04.1-Ubuntu SMP Thu Sep 21 10:08:54 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [5a84cf3db6ffb45808d711e27820b13396b233b3c67794d1aaab46bfe1614cd3] <==
	* I1006 02:15:47.302127       1 main.go:227] handling current node
	I1006 02:15:57.306733       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1006 02:15:57.306764       1 main.go:227] handling current node
	I1006 02:16:07.319997       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1006 02:16:07.320098       1 main.go:227] handling current node
	I1006 02:16:17.332185       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1006 02:16:17.332212       1 main.go:227] handling current node
	I1006 02:16:27.337048       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1006 02:16:27.337080       1 main.go:227] handling current node
	I1006 02:16:37.341037       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1006 02:16:37.341066       1 main.go:227] handling current node
	I1006 02:16:47.349152       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1006 02:16:47.349186       1 main.go:227] handling current node
	I1006 02:16:57.353190       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1006 02:16:57.353220       1 main.go:227] handling current node
	I1006 02:17:07.363437       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1006 02:17:07.363464       1 main.go:227] handling current node
	I1006 02:17:17.374830       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1006 02:17:17.374857       1 main.go:227] handling current node
	I1006 02:17:27.383808       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1006 02:17:27.383840       1 main.go:227] handling current node
	I1006 02:17:37.388134       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1006 02:17:37.388164       1 main.go:227] handling current node
	I1006 02:17:47.401004       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1006 02:17:47.401034       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [27c082cd8ae5999ad0f0ca05adbe90847b817c9351c089bc379a187e9d384369] <==
	* I1006 02:15:01.395849       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I1006 02:15:01.417123       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1006 02:15:02.454944       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1006 02:15:07.530500       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1006 02:15:07.929526       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.1.17"}
	I1006 02:15:22.505856       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1006 02:15:46.631225       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1006 02:15:46.631356       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1006 02:15:46.640793       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1006 02:15:46.640925       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1006 02:15:46.652493       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1006 02:15:46.652619       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1006 02:15:46.690136       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1006 02:15:46.690313       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1006 02:15:46.739117       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1006 02:15:46.739243       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1006 02:15:46.767695       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1006 02:15:46.767808       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1006 02:15:46.789515       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1006 02:15:46.790208       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1006 02:15:47.723862       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1006 02:15:47.772684       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1006 02:15:47.833715       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1006 02:16:08.565960       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.89.202"}
	I1006 02:17:27.933295       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.199.169"}
	
	* 
	* ==> kube-controller-manager [91a705de1743c9ffe70d4720f9dac4523e884bbe289aea05416b550ab6249c67] <==
	* W1006 02:16:56.822264       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1006 02:16:56.822299       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1006 02:17:21.237632       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1006 02:17:21.237665       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1006 02:17:27.669774       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1006 02:17:27.691109       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-mm6ft"
	I1006 02:17:27.700142       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="30.9631ms"
	I1006 02:17:27.717003       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="16.720377ms"
	I1006 02:17:27.717188       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="65.495µs"
	I1006 02:17:27.731088       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="44.316µs"
	I1006 02:17:30.544014       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="37.63µs"
	I1006 02:17:31.539852       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="42.446µs"
	W1006 02:17:32.071482       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1006 02:17:32.071517       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1006 02:17:32.540995       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="49.987µs"
	W1006 02:17:40.353059       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1006 02:17:40.353097       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1006 02:17:43.066272       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1006 02:17:43.066307       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1006 02:17:45.340037       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1006 02:17:45.345160       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-5c4c674fdc" duration="6.105µs"
	I1006 02:17:45.349976       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1006 02:17:46.587618       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="87.132µs"
	W1006 02:17:53.503794       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1006 02:17:53.503916       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
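The repeating PartialObjectMetadata list failures most likely trace back to the snapshot.storage.k8s.io groups being removed at 02:15:46-47 (see the kube-apiserver log above): the metadata informers keep retrying resources whose CRDs no longer exist. A quick check of what is still served (illustrative):

	kubectl --context addons-891734 api-resources | grep snapshot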
	
	* 
	* ==> kube-proxy [bf975e8417fd962a40c466d2913d0fdd34264e9aead6c69acba465f0c00401f5] <==
	* I1006 02:12:37.979355       1 server_others.go:69] "Using iptables proxy"
	I1006 02:12:38.176542       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1006 02:12:38.555613       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1006 02:12:38.562496       1 server_others.go:152] "Using iptables Proxier"
	I1006 02:12:38.562644       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1006 02:12:38.562678       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1006 02:12:38.562752       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1006 02:12:38.563125       1 server.go:846] "Version info" version="v1.28.2"
	I1006 02:12:38.563170       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 02:12:38.565280       1 config.go:188] "Starting service config controller"
	I1006 02:12:38.565298       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1006 02:12:38.565316       1 config.go:97] "Starting endpoint slice config controller"
	I1006 02:12:38.565320       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1006 02:12:38.565820       1 config.go:315] "Starting node config controller"
	I1006 02:12:38.565837       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1006 02:12:38.666788       1 shared_informer.go:318] Caches are synced for node config
	I1006 02:12:38.666822       1 shared_informer.go:318] Caches are synced for service config
	I1006 02:12:38.666848       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [d8f5c121581ac399db699f771268c9fd21bab013daa95c65dc53b921e150b077] <==
	* I1006 02:12:17.045437       1 serving.go:348] Generated self-signed cert in-memory
	W1006 02:12:18.764991       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1006 02:12:18.765028       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1006 02:12:18.765039       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1006 02:12:18.765046       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1006 02:12:18.785411       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I1006 02:12:18.785444       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 02:12:18.787822       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1006 02:12:18.787984       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 02:12:18.788029       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1006 02:12:18.788051       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W1006 02:12:18.802356       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1006 02:12:18.802455       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1006 02:12:19.688649       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Oct 06 02:17:32 addons-891734 kubelet[1349]: I1006 02:17:32.527859    1349 scope.go:117] "RemoveContainer" containerID="6d90a4e11e3214cc4c9ed15dee44dd820fc167142d40685497eda16a3be98c3a"
	Oct 06 02:17:32 addons-891734 kubelet[1349]: E1006 02:17:32.528143    1349 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-mm6ft_default(150419cb-ac5e-4795-9d55-8d177d28dc95)\"" pod="default/hello-world-app-5d77478584-mm6ft" podUID="150419cb-ac5e-4795-9d55-8d177d28dc95"
	Oct 06 02:17:37 addons-891734 kubelet[1349]: I1006 02:17:37.734969    1349 scope.go:117] "RemoveContainer" containerID="dd6fc4ce9a45b9be45d299c22952265b497b80b3ffc7516f88b40a7bfb7d7075"
	Oct 06 02:17:37 addons-891734 kubelet[1349]: E1006 02:17:37.735283    1349 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(c553fbd4-fcc7-42fe-868c-e85cd1ac80cb)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="c553fbd4-fcc7-42fe-868c-e85cd1ac80cb"
	Oct 06 02:17:38 addons-891734 kubelet[1349]: E1006 02:17:38.941081    1349 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/4afee1d62526244d9e4b7595494926510483278a8581b82485bfd56e889b891b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/4afee1d62526244d9e4b7595494926510483278a8581b82485bfd56e889b891b/diff: no such file or directory, extraDiskErr: <nil>
	Oct 06 02:17:43 addons-891734 kubelet[1349]: I1006 02:17:43.857839    1349 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mh9hd\" (UniqueName: \"kubernetes.io/projected/c553fbd4-fcc7-42fe-868c-e85cd1ac80cb-kube-api-access-mh9hd\") pod \"c553fbd4-fcc7-42fe-868c-e85cd1ac80cb\" (UID: \"c553fbd4-fcc7-42fe-868c-e85cd1ac80cb\") "
	Oct 06 02:17:43 addons-891734 kubelet[1349]: I1006 02:17:43.862825    1349 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c553fbd4-fcc7-42fe-868c-e85cd1ac80cb-kube-api-access-mh9hd" (OuterVolumeSpecName: "kube-api-access-mh9hd") pod "c553fbd4-fcc7-42fe-868c-e85cd1ac80cb" (UID: "c553fbd4-fcc7-42fe-868c-e85cd1ac80cb"). InnerVolumeSpecName "kube-api-access-mh9hd". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 06 02:17:43 addons-891734 kubelet[1349]: I1006 02:17:43.958886    1349 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-mh9hd\" (UniqueName: \"kubernetes.io/projected/c553fbd4-fcc7-42fe-868c-e85cd1ac80cb-kube-api-access-mh9hd\") on node \"addons-891734\" DevicePath \"\""
	Oct 06 02:17:44 addons-891734 kubelet[1349]: I1006 02:17:44.550994    1349 scope.go:117] "RemoveContainer" containerID="dd6fc4ce9a45b9be45d299c22952265b497b80b3ffc7516f88b40a7bfb7d7075"
	Oct 06 02:17:44 addons-891734 kubelet[1349]: E1006 02:17:44.676252    1349 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/2447191d092f705104bf69406d485fec66b2a345ac16b7b6e768766fb5bd9969/diff" to get inode usage: stat /var/lib/containers/storage/overlay/2447191d092f705104bf69406d485fec66b2a345ac16b7b6e768766fb5bd9969/diff: no such file or directory, extraDiskErr: <nil>
	Oct 06 02:17:44 addons-891734 kubelet[1349]: I1006 02:17:44.737530    1349 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c553fbd4-fcc7-42fe-868c-e85cd1ac80cb" path="/var/lib/kubelet/pods/c553fbd4-fcc7-42fe-868c-e85cd1ac80cb/volumes"
	Oct 06 02:17:45 addons-891734 kubelet[1349]: I1006 02:17:45.735612    1349 scope.go:117] "RemoveContainer" containerID="6d90a4e11e3214cc4c9ed15dee44dd820fc167142d40685497eda16a3be98c3a"
	Oct 06 02:17:46 addons-891734 kubelet[1349]: I1006 02:17:46.563630    1349 scope.go:117] "RemoveContainer" containerID="6d90a4e11e3214cc4c9ed15dee44dd820fc167142d40685497eda16a3be98c3a"
	Oct 06 02:17:46 addons-891734 kubelet[1349]: I1006 02:17:46.563984    1349 scope.go:117] "RemoveContainer" containerID="635e4b59f2236b19bea16aee27a2edcb89dcb439cf2cb45b43f2628b285570e9"
	Oct 06 02:17:46 addons-891734 kubelet[1349]: E1006 02:17:46.564321    1349 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-mm6ft_default(150419cb-ac5e-4795-9d55-8d177d28dc95)\"" pod="default/hello-world-app-5d77478584-mm6ft" podUID="150419cb-ac5e-4795-9d55-8d177d28dc95"
	Oct 06 02:17:46 addons-891734 kubelet[1349]: I1006 02:17:46.737345    1349 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="12ed1b3d-f600-4d4f-8c18-37083740115e" path="/var/lib/kubelet/pods/12ed1b3d-f600-4d4f-8c18-37083740115e/volumes"
	Oct 06 02:17:46 addons-891734 kubelet[1349]: I1006 02:17:46.737737    1349 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="940139f2-658e-436c-a5f2-6372959a9586" path="/var/lib/kubelet/pods/940139f2-658e-436c-a5f2-6372959a9586/volumes"
	Oct 06 02:17:48 addons-891734 kubelet[1349]: I1006 02:17:48.705304    1349 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b9ce15aa-5921-4cd2-b810-1292c8d62318-webhook-cert\") pod \"b9ce15aa-5921-4cd2-b810-1292c8d62318\" (UID: \"b9ce15aa-5921-4cd2-b810-1292c8d62318\") "
	Oct 06 02:17:48 addons-891734 kubelet[1349]: I1006 02:17:48.705364    1349 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4ff7\" (UniqueName: \"kubernetes.io/projected/b9ce15aa-5921-4cd2-b810-1292c8d62318-kube-api-access-p4ff7\") pod \"b9ce15aa-5921-4cd2-b810-1292c8d62318\" (UID: \"b9ce15aa-5921-4cd2-b810-1292c8d62318\") "
	Oct 06 02:17:48 addons-891734 kubelet[1349]: I1006 02:17:48.708529    1349 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9ce15aa-5921-4cd2-b810-1292c8d62318-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "b9ce15aa-5921-4cd2-b810-1292c8d62318" (UID: "b9ce15aa-5921-4cd2-b810-1292c8d62318"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 06 02:17:48 addons-891734 kubelet[1349]: I1006 02:17:48.709249    1349 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9ce15aa-5921-4cd2-b810-1292c8d62318-kube-api-access-p4ff7" (OuterVolumeSpecName: "kube-api-access-p4ff7") pod "b9ce15aa-5921-4cd2-b810-1292c8d62318" (UID: "b9ce15aa-5921-4cd2-b810-1292c8d62318"). InnerVolumeSpecName "kube-api-access-p4ff7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 06 02:17:48 addons-891734 kubelet[1349]: I1006 02:17:48.740677    1349 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b9ce15aa-5921-4cd2-b810-1292c8d62318" path="/var/lib/kubelet/pods/b9ce15aa-5921-4cd2-b810-1292c8d62318/volumes"
	Oct 06 02:17:48 addons-891734 kubelet[1349]: I1006 02:17:48.805895    1349 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-p4ff7\" (UniqueName: \"kubernetes.io/projected/b9ce15aa-5921-4cd2-b810-1292c8d62318-kube-api-access-p4ff7\") on node \"addons-891734\" DevicePath \"\""
	Oct 06 02:17:48 addons-891734 kubelet[1349]: I1006 02:17:48.805936    1349 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b9ce15aa-5921-4cd2-b810-1292c8d62318-webhook-cert\") on node \"addons-891734\" DevicePath \"\""
	Oct 06 02:17:49 addons-891734 kubelet[1349]: I1006 02:17:49.573629    1349 scope.go:117] "RemoveContainer" containerID="d3cb5f139682b6b3946e351e11987430b9c82218c9318ee52cdd823d45834ae0"
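Two crash loops are visible in this kubelet window: hello-world-app (back-off growing from 10s to 20s) and kube-ingress-dns-minikube (already at 2m40s); the latter lines up with the earlier nslookup timeout against the node IP. Had the pods still existed at collection time, the usual next step would be commands along these lines (illustrative):

	kubectl --context addons-891734 -n kube-system logs kube-ingress-dns-minikube --previous
	kubectl --context addons-891734 describe pod hello-world-app-5d77478584-mm6ft

The volume-unmount entries at 02:17:43 show the ingress-dns pod was already being torn down, so its previous logs may no longer be retrievable.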
	
	* 
	* ==> storage-provisioner [8ab22a60d852632d854bdf16b6fc8f673b4e03dd0d57ef0db62bcf29276d4890] <==
	* I1006 02:13:07.906713       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1006 02:13:07.920606       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1006 02:13:07.920783       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1006 02:13:07.928229       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1006 02:13:07.928505       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-891734_e7da60db-e4ca-432b-aee1-3b1bf8c4a1d4!
	I1006 02:13:07.929786       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a99e7680-254d-4388-8225-bd0b3886c2eb", APIVersion:"v1", ResourceVersion:"834", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-891734_e7da60db-e4ca-432b-aee1-3b1bf8c4a1d4 became leader
	I1006 02:13:08.044300       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-891734_e7da60db-e4ca-432b-aee1-3b1bf8c4a1d4!
	E1006 02:15:38.991594       1 controller.go:1050] claim "ff09fc74-d780-4cd9-9fb9-dff5768cf31d" in work queue no longer exists
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-891734 -n addons-891734
helpers_test.go:261: (dbg) Run:  kubectl --context addons-891734 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (168.60s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (179.37s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-923493 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
E1006 02:24:33.942995 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/client.crt: no such file or directory
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-923493 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (10.624859644s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-923493 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-923493 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [312fdcd0-3b32-405d-88e7-9674037cc557] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [312fdcd0-3b32-405d-88e7-9674037cc557] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.014314591s
addons_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-923493 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1006 02:25:01.627505 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/client.crt: no such file or directory
E1006 02:26:41.623617 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/functional-642904/client.crt: no such file or directory
E1006 02:26:41.629768 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/functional-642904/client.crt: no such file or directory
E1006 02:26:41.640112 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/functional-642904/client.crt: no such file or directory
E1006 02:26:41.660572 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/functional-642904/client.crt: no such file or directory
E1006 02:26:41.700960 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/functional-642904/client.crt: no such file or directory
E1006 02:26:41.781187 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/functional-642904/client.crt: no such file or directory
E1006 02:26:41.941631 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/functional-642904/client.crt: no such file or directory
E1006 02:26:42.261911 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/functional-642904/client.crt: no such file or directory
E1006 02:26:42.902683 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/functional-642904/client.crt: no such file or directory
E1006 02:26:44.183339 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/functional-642904/client.crt: no such file or directory
E1006 02:26:46.744080 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/functional-642904/client.crt: no such file or directory
E1006 02:26:51.864905 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/functional-642904/client.crt: no such file or directory
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ingress-addon-legacy-923493 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.897459882s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
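The ssh "Process exited with status 28" likely surfaces curl's exit code 28 (operation timed out), and the ~2m10s wall time is consistent with curl waiting out its connect timeout against the ingress on 127.0.0.1:80 inside the node. A bounded variant fails fast instead of consuming the test budget (illustrative; --max-time is a standard curl flag):

	out/minikube-linux-arm64 -p ingress-addon-legacy-923493 ssh "curl -s --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"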
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-923493 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-923493 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E1006 02:27:02.105765 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/functional-642904/client.crt: no such file or directory
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.021186669s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached
stderr: 
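";; connection timed out; no servers could be reached" means nothing answered DNS on 192.168.49.2:53 at all, which is where the ingress-dns addon is expected to listen. A probe with explicit short limits distinguishes a dead listener from a slow one (illustrative; dig's +time/+tries are standard options):

	dig +time=5 +tries=1 hello-john.test @192.168.49.2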
addons_test.go:305: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-923493 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-923493 addons disable ingress-dns --alsologtostderr -v=1: (2.741250679s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-923493 addons disable ingress --alsologtostderr -v=1
E1006 02:27:22.585978 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/functional-642904/client.crt: no such file or directory
addons_test.go:310: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-923493 addons disable ingress --alsologtostderr -v=1: (7.539264952s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-923493
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-923493:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "55bf4fa2efce21b027c59ce25d5b37f050b15214e936b6b386446ac35c36cde4",
	        "Created": "2023-10-06T02:23:05.625353547Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2297284,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-06T02:23:06.000756881Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7c31788aee97084e64d3a410721295a10fc01c1f34b468c1bc9be09686708026",
	        "ResolvConfPath": "/var/lib/docker/containers/55bf4fa2efce21b027c59ce25d5b37f050b15214e936b6b386446ac35c36cde4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/55bf4fa2efce21b027c59ce25d5b37f050b15214e936b6b386446ac35c36cde4/hostname",
	        "HostsPath": "/var/lib/docker/containers/55bf4fa2efce21b027c59ce25d5b37f050b15214e936b6b386446ac35c36cde4/hosts",
	        "LogPath": "/var/lib/docker/containers/55bf4fa2efce21b027c59ce25d5b37f050b15214e936b6b386446ac35c36cde4/55bf4fa2efce21b027c59ce25d5b37f050b15214e936b6b386446ac35c36cde4-json.log",
	        "Name": "/ingress-addon-legacy-923493",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-923493:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-923493",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2bf521aa107966a8d129e351b7d378d15f2c32c3e87f6f0ec8ebcbb8e2effa3d-init/diff:/var/lib/docker/overlay2/ab4f4fc5e8cd2d4bbf1718e21432b9cb0d953b7279be1c1cbb7bd550f03b46dc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2bf521aa107966a8d129e351b7d378d15f2c32c3e87f6f0ec8ebcbb8e2effa3d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2bf521aa107966a8d129e351b7d378d15f2c32c3e87f6f0ec8ebcbb8e2effa3d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2bf521aa107966a8d129e351b7d378d15f2c32c3e87f6f0ec8ebcbb8e2effa3d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-923493",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-923493/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-923493",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-923493",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-923493",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b55f0a20181c4dbef74cd2ba6ca05ece4c06c5f89ffc296012ae44b14b752191",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35279"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35278"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35275"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35277"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35276"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/b55f0a20181c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-923493": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "55bf4fa2efce",
	                        "ingress-addon-legacy-923493"
	                    ],
	                    "NetworkID": "862f4920581ff17a347f5363175fdf5d2fe692bd58fa3dc29135fbbc835fe910",
	                    "EndpointID": "066834b8dd202fdc1cae736110f050c4fd542f42022b2cf7bf7f2682d5b5d7c0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-923493 -n ingress-addon-legacy-923493
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-923493 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-923493 logs -n 25: (1.377367515s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                 Args                 |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| start          | -p functional-642904                 | functional-642904           | jenkins | v1.31.2 | 06 Oct 23 02:22 UTC |                     |
	|                | --dry-run --memory                   |                             |         |         |                     |                     |
	|                | 250MB --alsologtostderr              |                             |         |         |                     |                     |
	|                | --driver=docker                      |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| dashboard      | --url --port 36195                   | functional-642904           | jenkins | v1.31.2 | 06 Oct 23 02:22 UTC | 06 Oct 23 02:22 UTC |
	|                | -p functional-642904                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	| service        | functional-642904 service list       | functional-642904           | jenkins | v1.31.2 | 06 Oct 23 02:22 UTC | 06 Oct 23 02:22 UTC |
	| service        | functional-642904 service list       | functional-642904           | jenkins | v1.31.2 | 06 Oct 23 02:22 UTC | 06 Oct 23 02:22 UTC |
	|                | -o json                              |                             |         |         |                     |                     |
	| service        | functional-642904 service            | functional-642904           | jenkins | v1.31.2 | 06 Oct 23 02:22 UTC | 06 Oct 23 02:22 UTC |
	|                | --namespace=default --https          |                             |         |         |                     |                     |
	|                | --url hello-node                     |                             |         |         |                     |                     |
	| service        | functional-642904                    | functional-642904           | jenkins | v1.31.2 | 06 Oct 23 02:22 UTC | 06 Oct 23 02:22 UTC |
	|                | service hello-node --url             |                             |         |         |                     |                     |
	|                | --format={{.IP}}                     |                             |         |         |                     |                     |
	| service        | functional-642904 service            | functional-642904           | jenkins | v1.31.2 | 06 Oct 23 02:22 UTC | 06 Oct 23 02:22 UTC |
	|                | hello-node --url                     |                             |         |         |                     |                     |
	| update-context | functional-642904                    | functional-642904           | jenkins | v1.31.2 | 06 Oct 23 02:22 UTC | 06 Oct 23 02:22 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| update-context | functional-642904                    | functional-642904           | jenkins | v1.31.2 | 06 Oct 23 02:22 UTC | 06 Oct 23 02:22 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| update-context | functional-642904                    | functional-642904           | jenkins | v1.31.2 | 06 Oct 23 02:22 UTC | 06 Oct 23 02:22 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| image          | functional-642904                    | functional-642904           | jenkins | v1.31.2 | 06 Oct 23 02:22 UTC | 06 Oct 23 02:22 UTC |
	|                | image ls --format short              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-642904                    | functional-642904           | jenkins | v1.31.2 | 06 Oct 23 02:22 UTC | 06 Oct 23 02:22 UTC |
	|                | image ls --format yaml               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| ssh            | functional-642904 ssh pgrep          | functional-642904           | jenkins | v1.31.2 | 06 Oct 23 02:22 UTC |                     |
	|                | buildkitd                            |                             |         |         |                     |                     |
	| image          | functional-642904 image build -t     | functional-642904           | jenkins | v1.31.2 | 06 Oct 23 02:22 UTC | 06 Oct 23 02:22 UTC |
	|                | localhost/my-image:functional-642904 |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr     |                             |         |         |                     |                     |
	| image          | functional-642904 image ls           | functional-642904           | jenkins | v1.31.2 | 06 Oct 23 02:22 UTC | 06 Oct 23 02:22 UTC |
	| image          | functional-642904                    | functional-642904           | jenkins | v1.31.2 | 06 Oct 23 02:22 UTC | 06 Oct 23 02:22 UTC |
	|                | image ls --format json               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-642904                    | functional-642904           | jenkins | v1.31.2 | 06 Oct 23 02:22 UTC | 06 Oct 23 02:22 UTC |
	|                | image ls --format table              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| delete         | -p functional-642904                 | functional-642904           | jenkins | v1.31.2 | 06 Oct 23 02:22 UTC | 06 Oct 23 02:22 UTC |
	| start          | -p ingress-addon-legacy-923493       | ingress-addon-legacy-923493 | jenkins | v1.31.2 | 06 Oct 23 02:22 UTC | 06 Oct 23 02:24 UTC |
	|                | --kubernetes-version=v1.18.20        |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true            |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                 |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-923493          | ingress-addon-legacy-923493 | jenkins | v1.31.2 | 06 Oct 23 02:24 UTC | 06 Oct 23 02:24 UTC |
	|                | addons enable ingress                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-923493          | ingress-addon-legacy-923493 | jenkins | v1.31.2 | 06 Oct 23 02:24 UTC | 06 Oct 23 02:24 UTC |
	|                | addons enable ingress-dns            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-923493          | ingress-addon-legacy-923493 | jenkins | v1.31.2 | 06 Oct 23 02:24 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/        |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'         |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-923493 ip       | ingress-addon-legacy-923493 | jenkins | v1.31.2 | 06 Oct 23 02:26 UTC | 06 Oct 23 02:26 UTC |
	| addons         | ingress-addon-legacy-923493          | ingress-addon-legacy-923493 | jenkins | v1.31.2 | 06 Oct 23 02:27 UTC | 06 Oct 23 02:27 UTC |
	|                | addons disable ingress-dns           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-923493          | ingress-addon-legacy-923493 | jenkins | v1.31.2 | 06 Oct 23 02:27 UTC | 06 Oct 23 02:27 UTC |
	|                | addons disable ingress               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/06 02:22:47
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 02:22:47.129461 2296817 out.go:296] Setting OutFile to fd 1 ...
	I1006 02:22:47.130682 2296817 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 02:22:47.130720 2296817 out.go:309] Setting ErrFile to fd 2...
	I1006 02:22:47.130742 2296817 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 02:22:47.131174 2296817 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-2262959/.minikube/bin
	I1006 02:22:47.132004 2296817 out.go:303] Setting JSON to false
	I1006 02:22:47.133020 2296817 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":43514,"bootTime":1696515454,"procs":231,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1006 02:22:47.133100 2296817 start.go:138] virtualization:  
	I1006 02:22:47.137017 2296817 out.go:177] * [ingress-addon-legacy-923493] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1006 02:22:47.139228 2296817 out.go:177]   - MINIKUBE_LOCATION=17314
	I1006 02:22:47.141297 2296817 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 02:22:47.139305 2296817 notify.go:220] Checking for updates...
	I1006 02:22:47.143511 2296817 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17314-2262959/kubeconfig
	I1006 02:22:47.145472 2296817 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-2262959/.minikube
	I1006 02:22:47.147200 2296817 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 02:22:47.148895 2296817 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 02:22:47.150948 2296817 driver.go:378] Setting default libvirt URI to qemu:///system
	I1006 02:22:47.176179 2296817 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1006 02:22:47.176288 2296817 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 02:22:47.266154 2296817 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-10-06 02:22:47.256412701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1006 02:22:47.266282 2296817 docker.go:295] overlay module found
	I1006 02:22:47.268558 2296817 out.go:177] * Using the docker driver based on user configuration
	I1006 02:22:47.270391 2296817 start.go:298] selected driver: docker
	I1006 02:22:47.270419 2296817 start.go:902] validating driver "docker" against <nil>
	I1006 02:22:47.270432 2296817 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 02:22:47.271114 2296817 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 02:22:47.337423 2296817 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-10-06 02:22:47.327342368 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1006 02:22:47.337577 2296817 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1006 02:22:47.337799 2296817 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 02:22:47.340393 2296817 out.go:177] * Using Docker driver with root privileges
	I1006 02:22:47.342619 2296817 cni.go:84] Creating CNI manager for ""
	I1006 02:22:47.342654 2296817 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 02:22:47.342668 2296817 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 02:22:47.342684 2296817 start_flags.go:323] config:
	{Name:ingress-addon-legacy-923493 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-923493 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1006 02:22:47.344812 2296817 out.go:177] * Starting control plane node ingress-addon-legacy-923493 in cluster ingress-addon-legacy-923493
	I1006 02:22:47.346824 2296817 cache.go:122] Beginning downloading kic base image for docker with crio
	I1006 02:22:47.349144 2296817 out.go:177] * Pulling base image ...
	I1006 02:22:47.350919 2296817 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1006 02:22:47.350950 2296817 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1006 02:22:47.368020 2296817 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
	I1006 02:22:47.368043 2296817 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
	I1006 02:22:47.431887 2296817 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1006 02:22:47.431915 2296817 cache.go:57] Caching tarball of preloaded images
	I1006 02:22:47.432102 2296817 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1006 02:22:47.434511 2296817 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1006 02:22:47.436620 2296817 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1006 02:22:47.549427 2296817 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1006 02:22:57.607983 2296817 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1006 02:22:57.608771 2296817 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1006 02:22:58.803904 2296817 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I1006 02:22:58.804275 2296817 profile.go:148] Saving config to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/config.json ...
	I1006 02:22:58.804310 2296817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/config.json: {Name:mkde64688bc29005343b6a6c12c126ad1a694b3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:22:58.804517 2296817 cache.go:195] Successfully downloaded all kic artifacts
	I1006 02:22:58.804603 2296817 start.go:365] acquiring machines lock for ingress-addon-legacy-923493: {Name:mk6620c8f6fac8155da33fbfa1a5b28b0266b5a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 02:22:58.804674 2296817 start.go:369] acquired machines lock for "ingress-addon-legacy-923493" in 53.72µs
	I1006 02:22:58.804699 2296817 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-923493 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-923493 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 02:22:58.804775 2296817 start.go:125] createHost starting for "" (driver="docker")
	I1006 02:22:58.807639 2296817 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1006 02:22:58.807929 2296817 start.go:159] libmachine.API.Create for "ingress-addon-legacy-923493" (driver="docker")
	I1006 02:22:58.807979 2296817 client.go:168] LocalClient.Create starting
	I1006 02:22:58.808043 2296817 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem
	I1006 02:22:58.808083 2296817 main.go:141] libmachine: Decoding PEM data...
	I1006 02:22:58.808103 2296817 main.go:141] libmachine: Parsing certificate...
	I1006 02:22:58.808165 2296817 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem
	I1006 02:22:58.808189 2296817 main.go:141] libmachine: Decoding PEM data...
	I1006 02:22:58.808203 2296817 main.go:141] libmachine: Parsing certificate...
	I1006 02:22:58.808605 2296817 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-923493 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 02:22:58.826125 2296817 cli_runner.go:211] docker network inspect ingress-addon-legacy-923493 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 02:22:58.826225 2296817 network_create.go:281] running [docker network inspect ingress-addon-legacy-923493] to gather additional debugging logs...
	I1006 02:22:58.826250 2296817 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-923493
	W1006 02:22:58.843822 2296817 cli_runner.go:211] docker network inspect ingress-addon-legacy-923493 returned with exit code 1
	I1006 02:22:58.843855 2296817 network_create.go:284] error running [docker network inspect ingress-addon-legacy-923493]: docker network inspect ingress-addon-legacy-923493: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-923493 not found
	I1006 02:22:58.843869 2296817 network_create.go:286] output of [docker network inspect ingress-addon-legacy-923493]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-923493 not found
	
	** /stderr **
	I1006 02:22:58.843976 2296817 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 02:22:58.863442 2296817 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40004e98e0}
	I1006 02:22:58.863485 2296817 network_create.go:124] attempt to create docker network ingress-addon-legacy-923493 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1006 02:22:58.863548 2296817 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-923493 ingress-addon-legacy-923493
	I1006 02:22:58.944010 2296817 network_create.go:108] docker network ingress-addon-legacy-923493 192.168.49.0/24 created
	I1006 02:22:58.944047 2296817 kic.go:118] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-923493" container
	I1006 02:22:58.944120 2296817 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 02:22:58.960195 2296817 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-923493 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-923493 --label created_by.minikube.sigs.k8s.io=true
	I1006 02:22:58.978766 2296817 oci.go:103] Successfully created a docker volume ingress-addon-legacy-923493
	I1006 02:22:58.979208 2296817 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-923493-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-923493 --entrypoint /usr/bin/test -v ingress-addon-legacy-923493:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1006 02:23:00.642239 2296817 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-923493-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-923493 --entrypoint /usr/bin/test -v ingress-addon-legacy-923493:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib: (1.6629787s)
	I1006 02:23:00.642278 2296817 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-923493
	I1006 02:23:00.642298 2296817 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1006 02:23:00.642318 2296817 kic.go:191] Starting extracting preloaded images to volume ...
	I1006 02:23:00.642404 2296817 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-923493:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir
	I1006 02:23:05.543435 2296817 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-923493:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir: (4.900983069s)
	I1006 02:23:05.543468 2296817 kic.go:200] duration metric: took 4.901146 seconds to extract preloaded images to volume
	W1006 02:23:05.543620 2296817 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1006 02:23:05.543731 2296817 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 02:23:05.609207 2296817 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-923493 --name ingress-addon-legacy-923493 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-923493 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-923493 --network ingress-addon-legacy-923493 --ip 192.168.49.2 --volume ingress-addon-legacy-923493:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae
	I1006 02:23:06.009299 2296817 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-923493 --format={{.State.Running}}
	I1006 02:23:06.037998 2296817 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-923493 --format={{.State.Status}}
	I1006 02:23:06.067612 2296817 cli_runner.go:164] Run: docker exec ingress-addon-legacy-923493 stat /var/lib/dpkg/alternatives/iptables
	I1006 02:23:06.144921 2296817 oci.go:144] the created container "ingress-addon-legacy-923493" has a running status.
	I1006 02:23:06.144955 2296817 kic.go:222] Creating ssh key for kic: /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/ingress-addon-legacy-923493/id_rsa...
	I1006 02:23:06.367676 2296817 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/ingress-addon-legacy-923493/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1006 02:23:06.367775 2296817 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/ingress-addon-legacy-923493/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 02:23:06.398207 2296817 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-923493 --format={{.State.Status}}
	I1006 02:23:06.417915 2296817 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 02:23:06.417935 2296817 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-923493 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1006 02:23:06.509837 2296817 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-923493 --format={{.State.Status}}
	I1006 02:23:06.529221 2296817 machine.go:88] provisioning docker machine ...
	I1006 02:23:06.529256 2296817 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-923493"
	I1006 02:23:06.529323 2296817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-923493
	I1006 02:23:06.551636 2296817 main.go:141] libmachine: Using SSH client type: native
	I1006 02:23:06.552058 2296817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 35279 <nil> <nil>}
	I1006 02:23:06.552071 2296817 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-923493 && echo "ingress-addon-legacy-923493" | sudo tee /etc/hostname
	I1006 02:23:06.552781 2296817 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1006 02:23:09.699825 2296817 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-923493
	
	I1006 02:23:09.699906 2296817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-923493
	I1006 02:23:09.718810 2296817 main.go:141] libmachine: Using SSH client type: native
	I1006 02:23:09.719335 2296817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 35279 <nil> <nil>}
	I1006 02:23:09.719360 2296817 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-923493' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-923493/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-923493' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 02:23:09.848855 2296817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 02:23:09.848880 2296817 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17314-2262959/.minikube CaCertPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17314-2262959/.minikube}
	I1006 02:23:09.848899 2296817 ubuntu.go:177] setting up certificates
	I1006 02:23:09.848908 2296817 provision.go:83] configureAuth start
	I1006 02:23:09.848978 2296817 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-923493
	I1006 02:23:09.868279 2296817 provision.go:138] copyHostCerts
	I1006 02:23:09.868321 2296817 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.pem
	I1006 02:23:09.868351 2296817 exec_runner.go:144] found /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.pem, removing ...
	I1006 02:23:09.868361 2296817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.pem
	I1006 02:23:09.868438 2296817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.pem (1082 bytes)
	I1006 02:23:09.868524 2296817 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17314-2262959/.minikube/cert.pem
	I1006 02:23:09.868555 2296817 exec_runner.go:144] found /home/jenkins/minikube-integration/17314-2262959/.minikube/cert.pem, removing ...
	I1006 02:23:09.868564 2296817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17314-2262959/.minikube/cert.pem
	I1006 02:23:09.868595 2296817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17314-2262959/.minikube/cert.pem (1123 bytes)
	I1006 02:23:09.868649 2296817 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17314-2262959/.minikube/key.pem
	I1006 02:23:09.868671 2296817 exec_runner.go:144] found /home/jenkins/minikube-integration/17314-2262959/.minikube/key.pem, removing ...
	I1006 02:23:09.868679 2296817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17314-2262959/.minikube/key.pem
	I1006 02:23:09.868714 2296817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17314-2262959/.minikube/key.pem (1675 bytes)
	I1006 02:23:09.868763 2296817 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-923493 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-923493]
	I1006 02:23:10.841968 2296817 provision.go:172] copyRemoteCerts
	I1006 02:23:10.842045 2296817 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 02:23:10.842085 2296817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-923493
	I1006 02:23:10.867154 2296817 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35279 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/ingress-addon-legacy-923493/id_rsa Username:docker}
	I1006 02:23:10.965643 2296817 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1006 02:23:10.965703 2296817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 02:23:10.995351 2296817 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1006 02:23:10.995436 2296817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1006 02:23:11.024072 2296817 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1006 02:23:11.024136 2296817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1006 02:23:11.053153 2296817 provision.go:86] duration metric: configureAuth took 1.204230733s
	I1006 02:23:11.053178 2296817 ubuntu.go:193] setting minikube options for container-runtime
	I1006 02:23:11.053389 2296817 config.go:182] Loaded profile config "ingress-addon-legacy-923493": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1006 02:23:11.053506 2296817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-923493
	I1006 02:23:11.072182 2296817 main.go:141] libmachine: Using SSH client type: native
	I1006 02:23:11.072614 2296817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 35279 <nil> <nil>}
	I1006 02:23:11.072638 2296817 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 02:23:11.349674 2296817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 02:23:11.349696 2296817 machine.go:91] provisioned docker machine in 4.82045616s
	I1006 02:23:11.349707 2296817 client.go:171] LocalClient.Create took 12.541717464s
	I1006 02:23:11.349757 2296817 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-923493" took 12.541789293s
	I1006 02:23:11.349771 2296817 start.go:300] post-start starting for "ingress-addon-legacy-923493" (driver="docker")
	I1006 02:23:11.349782 2296817 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 02:23:11.349864 2296817 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 02:23:11.349937 2296817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-923493
	I1006 02:23:11.367688 2296817 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35279 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/ingress-addon-legacy-923493/id_rsa Username:docker}
	I1006 02:23:11.461977 2296817 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 02:23:11.465999 2296817 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 02:23:11.466038 2296817 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1006 02:23:11.466054 2296817 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1006 02:23:11.466071 2296817 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1006 02:23:11.466084 2296817 filesync.go:126] Scanning /home/jenkins/minikube-integration/17314-2262959/.minikube/addons for local assets ...
	I1006 02:23:11.466175 2296817 filesync.go:126] Scanning /home/jenkins/minikube-integration/17314-2262959/.minikube/files for local assets ...
	I1006 02:23:11.466284 2296817 filesync.go:149] local asset: /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/22683062.pem -> 22683062.pem in /etc/ssl/certs
	I1006 02:23:11.466295 2296817 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/22683062.pem -> /etc/ssl/certs/22683062.pem
	I1006 02:23:11.466456 2296817 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 02:23:11.476997 2296817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/22683062.pem --> /etc/ssl/certs/22683062.pem (1708 bytes)
	I1006 02:23:11.507598 2296817 start.go:303] post-start completed in 157.810666ms
	I1006 02:23:11.507975 2296817 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-923493
	I1006 02:23:11.526123 2296817 profile.go:148] Saving config to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/config.json ...
	I1006 02:23:11.526421 2296817 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 02:23:11.526473 2296817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-923493
	I1006 02:23:11.544955 2296817 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35279 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/ingress-addon-legacy-923493/id_rsa Username:docker}
	I1006 02:23:11.637286 2296817 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 02:23:11.643119 2296817 start.go:128] duration metric: createHost completed in 12.838328203s
	I1006 02:23:11.643144 2296817 start.go:83] releasing machines lock for "ingress-addon-legacy-923493", held for 12.838456188s
	I1006 02:23:11.643219 2296817 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-923493
	I1006 02:23:11.661548 2296817 ssh_runner.go:195] Run: cat /version.json
	I1006 02:23:11.661609 2296817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-923493
	I1006 02:23:11.661557 2296817 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 02:23:11.661748 2296817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-923493
	I1006 02:23:11.688511 2296817 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35279 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/ingress-addon-legacy-923493/id_rsa Username:docker}
	I1006 02:23:11.693540 2296817 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35279 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/ingress-addon-legacy-923493/id_rsa Username:docker}
	I1006 02:23:11.915977 2296817 ssh_runner.go:195] Run: systemctl --version
	I1006 02:23:11.921624 2296817 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 02:23:12.073199 2296817 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1006 02:23:12.078836 2296817 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 02:23:12.102365 2296817 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1006 02:23:12.102444 2296817 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 02:23:12.141377 2296817 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1006 02:23:12.141399 2296817 start.go:472] detecting cgroup driver to use...
	I1006 02:23:12.141432 2296817 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1006 02:23:12.141482 2296817 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 02:23:12.160420 2296817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 02:23:12.174320 2296817 docker.go:198] disabling cri-docker service (if available) ...
	I1006 02:23:12.174390 2296817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 02:23:12.190243 2296817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 02:23:12.207522 2296817 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 02:23:12.307241 2296817 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 02:23:12.410991 2296817 docker.go:214] disabling docker service ...
	I1006 02:23:12.411085 2296817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 02:23:12.432313 2296817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 02:23:12.446707 2296817 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 02:23:12.538581 2296817 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 02:23:12.646105 2296817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 02:23:12.659643 2296817 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 02:23:12.679032 2296817 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1006 02:23:12.679160 2296817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 02:23:12.690894 2296817 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 02:23:12.691002 2296817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 02:23:12.702515 2296817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 02:23:12.714590 2296817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
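Reconstructed from the three sed edits above, the keys they touch in /etc/crio/crio.conf.d/02-crio.conf come out as follows (section placement per CRI-O's documented schema; the rest of the drop-in is left untouched):

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.2"

CRI-O requires conmon_cgroup to be "pod" (or empty) whenever cgroup_manager is "cgroupfs", which is why the delete-then-append pair of sed commands always rewrites conmon_cgroup immediately after cgroup_manager.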
	I1006 02:23:12.727366 2296817 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 02:23:12.738356 2296817 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 02:23:12.748546 2296817 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 02:23:12.758751 2296817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 02:23:12.858359 2296817 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1006 02:23:12.989010 2296817 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 02:23:12.989089 2296817 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 02:23:12.995441 2296817 start.go:540] Will wait 60s for crictl version
	I1006 02:23:12.995560 2296817 ssh_runner.go:195] Run: which crictl
	I1006 02:23:13.000401 2296817 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1006 02:23:13.046055 2296817 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1006 02:23:13.046158 2296817 ssh_runner.go:195] Run: crio --version
	I1006 02:23:13.090820 2296817 ssh_runner.go:195] Run: crio --version
	I1006 02:23:13.135562 2296817 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I1006 02:23:13.137397 2296817 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-923493 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 02:23:13.154547 2296817 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 02:23:13.159083 2296817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 02:23:13.172187 2296817 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1006 02:23:13.172258 2296817 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 02:23:13.222600 2296817 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1006 02:23:13.222676 2296817 ssh_runner.go:195] Run: which lz4
	I1006 02:23:13.227469 2296817 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1006 02:23:13.227566 2296817 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1006 02:23:13.232026 2296817 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1006 02:23:13.232063 2296817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I1006 02:23:15.442580 2296817 crio.go:444] Took 2.215045 seconds to copy over tarball
	I1006 02:23:15.442730 2296817 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1006 02:23:18.126342 2296817 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.683567651s)
	I1006 02:23:18.126371 2296817 crio.go:451] Took 2.683725 seconds to extract the tarball
	I1006 02:23:18.126382 2296817 ssh_runner.go:146] rm: /preloaded.tar.lz4
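For scale: the two timed steps above stage the half-gigabyte preload, with 489,766,197 bytes copied in 2.215 s, i.e. 489766197 B / 2.215 s ≈ 221 MB/s (about 211 MiB/s) over the container's local SSH connection, followed by a 2.68 s lz4 extraction into /var before the tarball is removed.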
	I1006 02:23:18.397029 2296817 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 02:23:18.439473 2296817 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1006 02:23:18.439498 2296817 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1006 02:23:18.439538 2296817 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 02:23:18.439717 2296817 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1006 02:23:18.439802 2296817 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1006 02:23:18.439864 2296817 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1006 02:23:18.439938 2296817 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1006 02:23:18.440009 2296817 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1006 02:23:18.440074 2296817 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1006 02:23:18.440138 2296817 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1006 02:23:18.442483 2296817 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1006 02:23:18.442603 2296817 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1006 02:23:18.442671 2296817 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1006 02:23:18.442717 2296817 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1006 02:23:18.442733 2296817 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1006 02:23:18.442770 2296817 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 02:23:18.442773 2296817 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1006 02:23:18.442896 2296817 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	W1006 02:23:18.877377 2296817 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1006 02:23:18.877567 2296817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	W1006 02:23:18.879111 2296817 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I1006 02:23:18.879292 2296817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1006 02:23:18.903544 2296817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W1006 02:23:18.915060 2296817 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I1006 02:23:18.915241 2296817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	W1006 02:23:18.917157 2296817 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1006 02:23:18.917389 2296817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W1006 02:23:18.944399 2296817 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1006 02:23:18.944657 2296817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1006 02:23:18.955365 2296817 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1006 02:23:18.955490 2296817 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1006 02:23:18.955571 2296817 ssh_runner.go:195] Run: which crictl
	I1006 02:23:18.978423 2296817 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1006 02:23:18.978517 2296817 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1006 02:23:18.978598 2296817 ssh_runner.go:195] Run: which crictl
	W1006 02:23:18.994804 2296817 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1006 02:23:18.995176 2296817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1006 02:23:19.074594 2296817 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1006 02:23:19.074640 2296817 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1006 02:23:19.074693 2296817 ssh_runner.go:195] Run: which crictl
	W1006 02:23:19.076042 2296817 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1006 02:23:19.076185 2296817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 02:23:19.116918 2296817 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1006 02:23:19.116960 2296817 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1006 02:23:19.117012 2296817 ssh_runner.go:195] Run: which crictl
	I1006 02:23:19.117087 2296817 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1006 02:23:19.117109 2296817 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1006 02:23:19.117131 2296817 ssh_runner.go:195] Run: which crictl
	I1006 02:23:19.117201 2296817 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1006 02:23:19.117220 2296817 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1006 02:23:19.117243 2296817 ssh_runner.go:195] Run: which crictl
	I1006 02:23:19.117304 2296817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1006 02:23:19.117366 2296817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1006 02:23:19.132604 2296817 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1006 02:23:19.132678 2296817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1006 02:23:19.132711 2296817 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1006 02:23:19.132785 2296817 ssh_runner.go:195] Run: which crictl
	I1006 02:23:19.288179 2296817 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1006 02:23:19.288506 2296817 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 02:23:19.288549 2296817 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1006 02:23:19.288559 2296817 ssh_runner.go:195] Run: which crictl
	I1006 02:23:19.288511 2296817 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I1006 02:23:19.288453 2296817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1006 02:23:19.288655 2296817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1006 02:23:19.288402 2296817 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I1006 02:23:19.288449 2296817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1006 02:23:19.288478 2296817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1006 02:23:19.399398 2296817 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I1006 02:23:19.399481 2296817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 02:23:19.399576 2296817 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I1006 02:23:19.399625 2296817 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1006 02:23:19.399672 2296817 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I1006 02:23:19.460449 2296817 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1006 02:23:19.460554 2296817 cache_images.go:92] LoadImages completed in 1.021042182s
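Each image.go:265 "arch mismatch: want arm64 got amd64" warning above comes from comparing an image config's architecture against the host's before deciding the image needs a transfer. A sketch of that comparison using go-containerregistry (an assumed tool choice for illustration; the log shows only the warnings, not the code behind them):

    package images

    import (
        "runtime"

        "github.com/google/go-containerregistry/pkg/crane"
    )

    // ArchMismatch pulls ref's config from the registry and reports whether
    // its Architecture disagrees with this machine (runtime.GOARCH is
    // "arm64" on this job, while the v1.18-era images above resolve to
    // "amd64").
    func ArchMismatch(ref string) (bool, error) {
        img, err := crane.Pull(ref) // fetches the manifest plus config blob
        if err != nil {
            return false, err
        }
        cfg, err := img.ConfigFile()
        if err != nil {
            return false, err
        }
        return cfg.Architecture != runtime.GOARCH, nil
    }

When the check fails, LoadImages falls back to the per-arch tarballs under .minikube/cache/images/arm64/, the "Loading image from:" lines above; here pause_3.2 was never cached, hence the final "Unable to load cached images" warning.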
	W1006 02:23:19.460676 2296817 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2: no such file or directory
	I1006 02:23:19.460763 2296817 ssh_runner.go:195] Run: crio config
	I1006 02:23:19.516426 2296817 cni.go:84] Creating CNI manager for ""
	I1006 02:23:19.516450 2296817 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 02:23:19.516499 2296817 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1006 02:23:19.516527 2296817 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-923493 NodeName:ingress-addon-legacy-923493 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1006 02:23:19.516687 2296817 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-923493"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 02:23:19.516777 2296817 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-923493 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-923493 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1006 02:23:19.516849 2296817 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1006 02:23:19.527934 2296817 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 02:23:19.528065 2296817 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 02:23:19.538770 2296817 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I1006 02:23:19.559675 2296817 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1006 02:23:19.580339 2296817 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1006 02:23:19.600752 2296817 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1006 02:23:19.605267 2296817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 02:23:19.618139 2296817 certs.go:56] Setting up /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493 for IP: 192.168.49.2
	I1006 02:23:19.618171 2296817 certs.go:190] acquiring lock for shared ca certs: {Name:mk4532656c287f04abff160cc5263fddcb69ac4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:23:19.618350 2296817 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.key
	I1006 02:23:19.618399 2296817 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17314-2262959/.minikube/proxy-client-ca.key
	I1006 02:23:19.618453 2296817 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.key
	I1006 02:23:19.618468 2296817 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.crt with IP's: []
	I1006 02:23:20.045327 2296817 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.crt ...
	I1006 02:23:20.045359 2296817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.crt: {Name:mked05cd626e9ffc082433de97d94dda8d49706e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:23:20.045567 2296817 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.key ...
	I1006 02:23:20.045581 2296817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.key: {Name:mk9e1f6371517f781d4d7664abb04daf64dd3535 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:23:20.045685 2296817 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/apiserver.key.dd3b5fb2
	I1006 02:23:20.045703 2296817 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1006 02:23:20.413635 2296817 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/apiserver.crt.dd3b5fb2 ...
	I1006 02:23:20.413665 2296817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/apiserver.crt.dd3b5fb2: {Name:mk48e99d268c1a85017def546c99c404e75d33c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:23:20.413850 2296817 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/apiserver.key.dd3b5fb2 ...
	I1006 02:23:20.413863 2296817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/apiserver.key.dd3b5fb2: {Name:mk4521ce88c884dfb67ca48ad197c36f8b2713cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:23:20.413948 2296817 certs.go:337] copying /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/apiserver.crt
	I1006 02:23:20.414032 2296817 certs.go:341] copying /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/apiserver.key
	I1006 02:23:20.414092 2296817 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/proxy-client.key
	I1006 02:23:20.414108 2296817 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/proxy-client.crt with IP's: []
	I1006 02:23:21.130044 2296817 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/proxy-client.crt ...
	I1006 02:23:21.130074 2296817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/proxy-client.crt: {Name:mka6aaf881b9160ec4f8b0ce92d072411d12892e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:23:21.130259 2296817 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/proxy-client.key ...
	I1006 02:23:21.130273 2296817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/proxy-client.key: {Name:mk2a1184a123df54e0742b7115b543fc36bd331a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
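The crypto.go lines above issue three leaf certificates (client, apiserver, aggregator proxy-client), each signed by a CA kept from a previous run, with the apiserver cert carrying the IP SANs 192.168.49.2, 10.96.0.1, 127.0.0.1 and 10.0.0.1. A self-contained sketch of issuing such a serving cert with Go's standard library (SAN values and the 26280h ≈ 3-year lifetime copied from this log; a hypothetical helper, not minikube's crypto.go):

    package certs

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // IssueServingCert signs a new apiserver-style serving certificate with
    // the given CA, embedding the same IP SANs the log shows for apiserver.crt.
    func IssueServingCert(ca *x509.Certificate, caKey *rsa.PrivateKey) (derBytes []byte, key *rsa.PrivateKey, err error) {
        key, err = rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{CommonName: "minikube"},
            IPAddresses: []net.IP{
                net.ParseIP("192.168.49.2"), // node IP
                net.ParseIP("10.96.0.1"),    // kubernetes Service ClusterIP
                net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"),
            },
            NotBefore:   time.Now(),
            NotAfter:    time.Now().AddDate(3, 0, 0), // CertExpiration:26280h0m0s
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        derBytes, err = x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        return derBytes, key, err
    }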
	I1006 02:23:21.130359 2296817 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1006 02:23:21.130389 2296817 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1006 02:23:21.130406 2296817 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1006 02:23:21.130421 2296817 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1006 02:23:21.130435 2296817 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1006 02:23:21.130451 2296817 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1006 02:23:21.130464 2296817 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1006 02:23:21.130481 2296817 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1006 02:23:21.130533 2296817 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/2268306.pem (1338 bytes)
	W1006 02:23:21.130577 2296817 certs.go:433] ignoring /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/2268306_empty.pem, impossibly tiny 0 bytes
	I1006 02:23:21.130588 2296817 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 02:23:21.130613 2296817 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem (1082 bytes)
	I1006 02:23:21.130641 2296817 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem (1123 bytes)
	I1006 02:23:21.130673 2296817 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/key.pem (1675 bytes)
	I1006 02:23:21.130724 2296817 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/22683062.pem (1708 bytes)
	I1006 02:23:21.130760 2296817 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/2268306.pem -> /usr/share/ca-certificates/2268306.pem
	I1006 02:23:21.130780 2296817 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/22683062.pem -> /usr/share/ca-certificates/22683062.pem
	I1006 02:23:21.130791 2296817 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1006 02:23:21.131451 2296817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1006 02:23:21.162226 2296817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1006 02:23:21.190622 2296817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 02:23:21.218844 2296817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 02:23:21.247630 2296817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 02:23:21.275605 2296817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 02:23:21.302334 2296817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 02:23:21.330153 2296817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1006 02:23:21.358238 2296817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/2268306.pem --> /usr/share/ca-certificates/2268306.pem (1338 bytes)
	I1006 02:23:21.386291 2296817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/22683062.pem --> /usr/share/ca-certificates/22683062.pem (1708 bytes)
	I1006 02:23:21.413976 2296817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 02:23:21.442137 2296817 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 02:23:21.463505 2296817 ssh_runner.go:195] Run: openssl version
	I1006 02:23:21.470303 2296817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 02:23:21.481821 2296817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 02:23:21.486493 2296817 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  6 02:12 /usr/share/ca-certificates/minikubeCA.pem
	I1006 02:23:21.486584 2296817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 02:23:21.495032 2296817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 02:23:21.508626 2296817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2268306.pem && ln -fs /usr/share/ca-certificates/2268306.pem /etc/ssl/certs/2268306.pem"
	I1006 02:23:21.520011 2296817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2268306.pem
	I1006 02:23:21.524788 2296817 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  6 02:19 /usr/share/ca-certificates/2268306.pem
	I1006 02:23:21.524881 2296817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2268306.pem
	I1006 02:23:21.533583 2296817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2268306.pem /etc/ssl/certs/51391683.0"
	I1006 02:23:21.545142 2296817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22683062.pem && ln -fs /usr/share/ca-certificates/22683062.pem /etc/ssl/certs/22683062.pem"
	I1006 02:23:21.556713 2296817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22683062.pem
	I1006 02:23:21.561217 2296817 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  6 02:19 /usr/share/ca-certificates/22683062.pem
	I1006 02:23:21.561324 2296817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22683062.pem
	I1006 02:23:21.569663 2296817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22683062.pem /etc/ssl/certs/3ec20f2e.0"
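The test/ln -fs pairs above implement OpenSSL's hashed-directory convention: a CA becomes trusted once /etc/ssl/certs/<subject-hash>.0 points at its PEM, where the hash is what `openssl x509 -hash -noout` prints (b5213941 for minikubeCA.pem, 51391683 and 3ec20f2e for the two test certs). The same steps as a small Go helper wrapping the logged commands (a sketch):

    package certs

    import (
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // TrustCA links /etc/ssl/certs/<hash>.0 at pemPath, where <hash> is the
    // certificate's OpenSSL subject hash, so OpenSSL-based clients find the
    // CA in their default verify directory.
    func TrustCA(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        if _, err := os.Lstat(link); err == nil {
            return nil // hash link already present
        }
        return os.Symlink(pemPath, link)
    }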
	I1006 02:23:21.581303 2296817 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1006 02:23:21.585687 2296817 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1006 02:23:21.585775 2296817 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-923493 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-923493 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1006 02:23:21.585867 2296817 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 02:23:21.585928 2296817 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 02:23:21.627093 2296817 cri.go:89] found id: ""
	I1006 02:23:21.627210 2296817 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 02:23:21.637686 2296817 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 02:23:21.648439 2296817 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1006 02:23:21.648553 2296817 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 02:23:21.659102 2296817 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 02:23:21.659156 2296817 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 02:23:21.714040 2296817 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1006 02:23:21.714096 2296817 kubeadm.go:322] [preflight] Running pre-flight checks
	I1006 02:23:21.763282 2296817 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1006 02:23:21.763354 2296817 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-aws
	I1006 02:23:21.763395 2296817 kubeadm.go:322] OS: Linux
	I1006 02:23:21.763442 2296817 kubeadm.go:322] CGROUPS_CPU: enabled
	I1006 02:23:21.763490 2296817 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1006 02:23:21.763539 2296817 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1006 02:23:21.763587 2296817 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1006 02:23:21.763635 2296817 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1006 02:23:21.763688 2296817 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1006 02:23:21.853105 2296817 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 02:23:21.853211 2296817 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 02:23:21.853303 2296817 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1006 02:23:22.088187 2296817 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 02:23:22.090600 2296817 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 02:23:22.090910 2296817 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1006 02:23:22.198294 2296817 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 02:23:22.200489 2296817 out.go:204]   - Generating certificates and keys ...
	I1006 02:23:22.200669 2296817 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1006 02:23:22.200783 2296817 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1006 02:23:22.569581 2296817 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1006 02:23:22.882742 2296817 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1006 02:23:23.113731 2296817 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1006 02:23:23.830658 2296817 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1006 02:23:24.061187 2296817 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1006 02:23:24.061362 2296817 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-923493 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 02:23:24.417624 2296817 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1006 02:23:24.418045 2296817 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-923493 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 02:23:24.754847 2296817 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1006 02:23:24.983870 2296817 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1006 02:23:25.397367 2296817 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1006 02:23:25.397660 2296817 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 02:23:25.763299 2296817 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 02:23:26.377872 2296817 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 02:23:27.336080 2296817 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 02:23:27.917393 2296817 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 02:23:27.918276 2296817 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 02:23:27.920193 2296817 out.go:204]   - Booting up control plane ...
	I1006 02:23:27.920301 2296817 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 02:23:27.926065 2296817 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 02:23:27.927633 2296817 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 02:23:27.929584 2296817 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 02:23:27.933856 2296817 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1006 02:23:39.939776 2296817 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.003937 seconds
	I1006 02:23:39.939926 2296817 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1006 02:23:39.961033 2296817 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1006 02:23:40.480512 2296817 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1006 02:23:40.480666 2296817 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-923493 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1006 02:23:40.989148 2296817 kubeadm.go:322] [bootstrap-token] Using token: 16h6e8.v0fjoa59k2vkdud1
	I1006 02:23:40.991217 2296817 out.go:204]   - Configuring RBAC rules ...
	I1006 02:23:40.991351 2296817 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1006 02:23:40.996955 2296817 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1006 02:23:41.012889 2296817 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1006 02:23:41.016003 2296817 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1006 02:23:41.018875 2296817 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1006 02:23:41.026425 2296817 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1006 02:23:41.047221 2296817 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1006 02:23:41.326291 2296817 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1006 02:23:41.425652 2296817 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1006 02:23:41.425670 2296817 kubeadm.go:322] 
	I1006 02:23:41.425728 2296817 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1006 02:23:41.425734 2296817 kubeadm.go:322] 
	I1006 02:23:41.425805 2296817 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1006 02:23:41.425810 2296817 kubeadm.go:322] 
	I1006 02:23:41.425837 2296817 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1006 02:23:41.425892 2296817 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1006 02:23:41.425940 2296817 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1006 02:23:41.425945 2296817 kubeadm.go:322] 
	I1006 02:23:41.425994 2296817 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1006 02:23:41.426063 2296817 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1006 02:23:41.426135 2296817 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1006 02:23:41.426141 2296817 kubeadm.go:322] 
	I1006 02:23:41.426219 2296817 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1006 02:23:41.426290 2296817 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1006 02:23:41.426295 2296817 kubeadm.go:322] 
	I1006 02:23:41.426373 2296817 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 16h6e8.v0fjoa59k2vkdud1 \
	I1006 02:23:41.426479 2296817 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:80472cab419bab12b6f684b1710332d34cec39ebaa3946bc81d26724ddc88d38 \
	I1006 02:23:41.426501 2296817 kubeadm.go:322]     --control-plane 
	I1006 02:23:41.426506 2296817 kubeadm.go:322] 
	I1006 02:23:41.426585 2296817 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1006 02:23:41.426590 2296817 kubeadm.go:322] 
	I1006 02:23:41.426666 2296817 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 16h6e8.v0fjoa59k2vkdud1 \
	I1006 02:23:41.426764 2296817 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:80472cab419bab12b6f684b1710332d34cec39ebaa3946bc81d26724ddc88d38 
	I1006 02:23:41.430102 2296817 kubeadm.go:322] W1006 02:23:21.713004    1236 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1006 02:23:41.430308 2296817 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-aws\n", err: exit status 1
	I1006 02:23:41.430407 2296817 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 02:23:41.430531 2296817 kubeadm.go:322] W1006 02:23:27.925910    1236 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1006 02:23:41.430646 2296817 kubeadm.go:322] W1006 02:23:27.927503    1236 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1006 02:23:41.430663 2296817 cni.go:84] Creating CNI manager for ""
	I1006 02:23:41.430670 2296817 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 02:23:41.432925 2296817 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1006 02:23:41.435031 2296817 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1006 02:23:41.440056 2296817 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1006 02:23:41.440078 2296817 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1006 02:23:41.462995 2296817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1006 02:23:41.929333 2296817 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1006 02:23:41.929472 2296817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:23:41.929562 2296817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=84890cb24d0240d9d992d7c7712ee519ceed4154 minikube.k8s.io/name=ingress-addon-legacy-923493 minikube.k8s.io/updated_at=2023_10_06T02_23_41_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:23:41.940811 2296817 ops.go:34] apiserver oom_adj: -16
	I1006 02:23:42.100033 2296817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:23:42.218423 2296817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:23:42.824248 2296817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:23:43.324931 2296817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:23:43.824459 2296817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:23:44.324894 2296817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:23:44.824844 2296817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:23:45.324286 2296817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:23:45.824482 2296817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:23:46.324059 2296817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:23:46.824282 2296817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:23:47.324737 2296817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:23:47.824168 2296817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:23:48.324352 2296817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:23:48.824300 2296817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:23:49.324912 2296817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:23:49.824257 2296817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:23:50.324735 2296817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:23:50.824235 2296817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:23:51.324262 2296817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:23:51.824293 2296817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:23:52.324819 2296817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:23:52.823962 2296817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:23:53.323998 2296817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:23:53.824051 2296817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:23:54.324766 2296817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:23:54.824260 2296817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:23:55.324312 2296817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:23:55.825018 2296817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:23:56.324014 2296817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:23:56.824892 2296817 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:23:56.918164 2296817 kubeadm.go:1081] duration metric: took 14.988743785s to wait for elevateKubeSystemPrivileges.
	I1006 02:23:56.918195 2296817 kubeadm.go:406] StartCluster complete in 35.332425143s
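The run of `kubectl get sa default` calls above is a fixed-interval poll: starting at 02:23:42, minikube retries every 500 ms until the default service account exists (service accounts are created asynchronously by the controller-manager after the API server comes up), succeeding after 14.99 s. A stand-alone sketch of the same loop (interval read off the log timestamps; a hypothetical helper):

    package poll

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // WaitForDefaultSA re-runs `kubectl get sa default` every 500ms until it
    // succeeds or the timeout elapses, matching the cadence in the log above.
    func WaitForDefaultSA(kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            err := exec.Command("kubectl", "get", "sa", "default",
                "--kubeconfig="+kubeconfig).Run()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("default serviceaccount never appeared: %w", err)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }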
	I1006 02:23:56.918213 2296817 settings.go:142] acquiring lock: {Name:mkbf8759b61c125112e0d07f4c53bb4e84a6de4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:23:56.918272 2296817 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17314-2262959/kubeconfig
	I1006 02:23:56.918979 2296817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/kubeconfig: {Name:mkf2ae4867a3638ac11dac5beae6919a4f83b43a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:23:56.919809 2296817 kapi.go:59] client config for ingress-addon-legacy-923493: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.crt", KeyFile:"/home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.key", CAFile:"/home/jenkins/minikube-integration/17314-2262959/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16a24a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 02:23:56.921478 2296817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1006 02:23:56.922032 2296817 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1006 02:23:56.922107 2296817 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-923493"
	I1006 02:23:56.922122 2296817 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-923493"
	I1006 02:23:56.922185 2296817 host.go:66] Checking if "ingress-addon-legacy-923493" exists ...
	I1006 02:23:56.922679 2296817 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-923493 --format={{.State.Status}}
	I1006 02:23:56.922828 2296817 cert_rotation.go:137] Starting client certificate rotation controller
	I1006 02:23:56.923016 2296817 config.go:182] Loaded profile config "ingress-addon-legacy-923493": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1006 02:23:56.923073 2296817 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-923493"
	I1006 02:23:56.923089 2296817 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-923493"
	I1006 02:23:56.923350 2296817 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-923493 --format={{.State.Status}}
	I1006 02:23:56.974924 2296817 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 02:23:56.973532 2296817 kapi.go:59] client config for ingress-addon-legacy-923493: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.crt", KeyFile:"/home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.key", CAFile:"/home/jenkins/minikube-integration/17314-2262959/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16a24a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 02:23:56.975273 2296817 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-923493"
	I1006 02:23:56.977287 2296817 host.go:66] Checking if "ingress-addon-legacy-923493" exists ...
	I1006 02:23:56.977307 2296817 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 02:23:56.977347 2296817 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 02:23:56.977449 2296817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-923493
	I1006 02:23:56.977787 2296817 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-923493 --format={{.State.Status}}
	I1006 02:23:57.026915 2296817 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-923493" context rescaled to 1 replicas
	I1006 02:23:57.026954 2296817 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 02:23:57.031106 2296817 out.go:177] * Verifying Kubernetes components...
	I1006 02:23:57.033007 2296817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 02:23:57.038769 2296817 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35279 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/ingress-addon-legacy-923493/id_rsa Username:docker}
	I1006 02:23:57.038813 2296817 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 02:23:57.038827 2296817 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 02:23:57.038890 2296817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-923493
	I1006 02:23:57.082521 2296817 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35279 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/ingress-addon-legacy-923493/id_rsa Username:docker}
	I1006 02:23:57.132363 2296817 kapi.go:59] client config for ingress-addon-legacy-923493: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.crt", KeyFile:"/home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.key", CAFile:"/home/jenkins/minikube-integration/17314-2262959/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16a24a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 02:23:57.132648 2296817 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-923493" to be "Ready" ...
	I1006 02:23:57.132981 2296817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1006 02:23:57.251356 2296817 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 02:23:57.282163 2296817 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1006 02:23:57.492131 2296817 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
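The sed pipeline at 02:23:57 rewrites the coredns ConfigMap in place before replacing it. A minimal sketch of the resulting Corefile fragment, assuming the stock v1.18 CoreDNS defaults around it (only the "log" line and the "hosts" block come from this log; the neighboring directives are assumed defaults):

	    log
	    errors
	    ...
	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf

This injected hosts block is what lets pods resolve host.minikube.internal to the Docker host at 192.168.49.1.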
	I1006 02:23:57.673736 2296817 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1006 02:23:57.675569 2296817 addons.go:502] enable addons completed in 753.529029ms: enabled=[storage-provisioner default-storageclass]
	I1006 02:23:59.193293 2296817 node_ready.go:58] node "ingress-addon-legacy-923493" has status "Ready":"False"
	I1006 02:24:01.688009 2296817 node_ready.go:58] node "ingress-addon-legacy-923493" has status "Ready":"False"
	I1006 02:24:03.688100 2296817 node_ready.go:58] node "ingress-addon-legacy-923493" has status "Ready":"False"
	I1006 02:24:05.188638 2296817 node_ready.go:49] node "ingress-addon-legacy-923493" has status "Ready":"True"
	I1006 02:24:05.188664 2296817 node_ready.go:38] duration metric: took 8.055991746s waiting for node "ingress-addon-legacy-923493" to be "Ready" ...
	I1006 02:24:05.188676 2296817 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1006 02:24:05.196125 2296817 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-xg5qv" in "kube-system" namespace to be "Ready" ...
	I1006 02:24:07.210830 2296817 pod_ready.go:102] pod "coredns-66bff467f8-xg5qv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-06 02:23:56 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1006 02:24:09.214791 2296817 pod_ready.go:102] pod "coredns-66bff467f8-xg5qv" in "kube-system" namespace has status "Ready":"False"
	I1006 02:24:11.715353 2296817 pod_ready.go:102] pod "coredns-66bff467f8-xg5qv" in "kube-system" namespace has status "Ready":"False"
	I1006 02:24:13.714691 2296817 pod_ready.go:92] pod "coredns-66bff467f8-xg5qv" in "kube-system" namespace has status "Ready":"True"
	I1006 02:24:13.714716 2296817 pod_ready.go:81] duration metric: took 8.518557443s waiting for pod "coredns-66bff467f8-xg5qv" in "kube-system" namespace to be "Ready" ...
	I1006 02:24:13.714729 2296817 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-923493" in "kube-system" namespace to be "Ready" ...
	I1006 02:24:13.719105 2296817 pod_ready.go:92] pod "etcd-ingress-addon-legacy-923493" in "kube-system" namespace has status "Ready":"True"
	I1006 02:24:13.719131 2296817 pod_ready.go:81] duration metric: took 4.390714ms waiting for pod "etcd-ingress-addon-legacy-923493" in "kube-system" namespace to be "Ready" ...
	I1006 02:24:13.719146 2296817 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-923493" in "kube-system" namespace to be "Ready" ...
	I1006 02:24:13.723628 2296817 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-923493" in "kube-system" namespace has status "Ready":"True"
	I1006 02:24:13.723655 2296817 pod_ready.go:81] duration metric: took 4.501231ms waiting for pod "kube-apiserver-ingress-addon-legacy-923493" in "kube-system" namespace to be "Ready" ...
	I1006 02:24:13.723667 2296817 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-923493" in "kube-system" namespace to be "Ready" ...
	I1006 02:24:13.737253 2296817 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-923493" in "kube-system" namespace has status "Ready":"True"
	I1006 02:24:13.737278 2296817 pod_ready.go:81] duration metric: took 13.601841ms waiting for pod "kube-controller-manager-ingress-addon-legacy-923493" in "kube-system" namespace to be "Ready" ...
	I1006 02:24:13.737291 2296817 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gm2bd" in "kube-system" namespace to be "Ready" ...
	I1006 02:24:13.742113 2296817 pod_ready.go:92] pod "kube-proxy-gm2bd" in "kube-system" namespace has status "Ready":"True"
	I1006 02:24:13.742145 2296817 pod_ready.go:81] duration metric: took 4.848375ms waiting for pod "kube-proxy-gm2bd" in "kube-system" namespace to be "Ready" ...
	I1006 02:24:13.742156 2296817 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-923493" in "kube-system" namespace to be "Ready" ...
	I1006 02:24:13.909552 2296817 request.go:629] Waited for 167.280361ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-923493
	I1006 02:24:14.109337 2296817 request.go:629] Waited for 197.169206ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-923493
	I1006 02:24:14.112210 2296817 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-923493" in "kube-system" namespace has status "Ready":"True"
	I1006 02:24:14.112235 2296817 pod_ready.go:81] duration metric: took 370.05641ms waiting for pod "kube-scheduler-ingress-addon-legacy-923493" in "kube-system" namespace to be "Ready" ...
	I1006 02:24:14.112249 2296817 pod_ready.go:38] duration metric: took 8.923556517s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1006 02:24:14.112264 2296817 api_server.go:52] waiting for apiserver process to appear ...
	I1006 02:24:14.112331 2296817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 02:24:14.126393 2296817 api_server.go:72] duration metric: took 17.099406797s to wait for apiserver process to appear ...
	I1006 02:24:14.126417 2296817 api_server.go:88] waiting for apiserver healthz status ...
	I1006 02:24:14.126435 2296817 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1006 02:24:14.135620 2296817 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1006 02:24:14.136581 2296817 api_server.go:141] control plane version: v1.18.20
	I1006 02:24:14.136609 2296817 api_server.go:131] duration metric: took 10.183643ms to wait for apiserver health ...
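The healthz probe above is a client-certificate HTTPS GET against the apiserver. Reproduced by hand it would look roughly like the sketch below; the cert, key, and CA paths are the profile files named in the rest.Config dumps earlier in this log, and the exact curl flags are illustrative rather than the call minikube makes internally:

	curl --cacert /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.crt \
	  --cert /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.crt \
	  --key /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.key \
	  https://192.168.49.2:8443/healthz
	# expected body on success: ok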
	I1006 02:24:14.136618 2296817 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 02:24:14.310032 2296817 request.go:629] Waited for 173.312309ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1006 02:24:14.315794 2296817 system_pods.go:59] 8 kube-system pods found
	I1006 02:24:14.315837 2296817 system_pods.go:61] "coredns-66bff467f8-xg5qv" [7e24bce7-caf0-4f6f-ab72-f2ee11361692] Running
	I1006 02:24:14.315846 2296817 system_pods.go:61] "etcd-ingress-addon-legacy-923493" [8c75671d-06d0-47a4-a5ea-c11e3fdf52ad] Running
	I1006 02:24:14.315851 2296817 system_pods.go:61] "kindnet-2lmjb" [ca23fa73-ca78-492b-a451-ba89ae4594f4] Running
	I1006 02:24:14.315890 2296817 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-923493" [fbfe6dca-c422-4269-9039-bac4e85865b9] Running
	I1006 02:24:14.315905 2296817 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-923493" [a265b36c-284e-47f9-82bc-d4e80fe13058] Running
	I1006 02:24:14.315913 2296817 system_pods.go:61] "kube-proxy-gm2bd" [ba854102-60c3-4d08-889d-06b7c69e67db] Running
	I1006 02:24:14.315920 2296817 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-923493" [9765a569-10c2-4f28-b9f7-38ce86d4c1b8] Running
	I1006 02:24:14.315930 2296817 system_pods.go:61] "storage-provisioner" [64b20ae6-044d-44cd-a396-3bbe3873b890] Running
	I1006 02:24:14.315936 2296817 system_pods.go:74] duration metric: took 179.312557ms to wait for pod list to return data ...
	I1006 02:24:14.315953 2296817 default_sa.go:34] waiting for default service account to be created ...
	I1006 02:24:14.509298 2296817 request.go:629] Waited for 193.277738ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1006 02:24:14.511688 2296817 default_sa.go:45] found service account: "default"
	I1006 02:24:14.511713 2296817 default_sa.go:55] duration metric: took 195.753672ms for default service account to be created ...
	I1006 02:24:14.511724 2296817 system_pods.go:116] waiting for k8s-apps to be running ...
	I1006 02:24:14.710116 2296817 request.go:629] Waited for 198.332014ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1006 02:24:14.716912 2296817 system_pods.go:86] 8 kube-system pods found
	I1006 02:24:14.716939 2296817 system_pods.go:89] "coredns-66bff467f8-xg5qv" [7e24bce7-caf0-4f6f-ab72-f2ee11361692] Running
	I1006 02:24:14.716946 2296817 system_pods.go:89] "etcd-ingress-addon-legacy-923493" [8c75671d-06d0-47a4-a5ea-c11e3fdf52ad] Running
	I1006 02:24:14.716951 2296817 system_pods.go:89] "kindnet-2lmjb" [ca23fa73-ca78-492b-a451-ba89ae4594f4] Running
	I1006 02:24:14.716957 2296817 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-923493" [fbfe6dca-c422-4269-9039-bac4e85865b9] Running
	I1006 02:24:14.716963 2296817 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-923493" [a265b36c-284e-47f9-82bc-d4e80fe13058] Running
	I1006 02:24:14.716971 2296817 system_pods.go:89] "kube-proxy-gm2bd" [ba854102-60c3-4d08-889d-06b7c69e67db] Running
	I1006 02:24:14.716982 2296817 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-923493" [9765a569-10c2-4f28-b9f7-38ce86d4c1b8] Running
	I1006 02:24:14.716990 2296817 system_pods.go:89] "storage-provisioner" [64b20ae6-044d-44cd-a396-3bbe3873b890] Running
	I1006 02:24:14.716997 2296817 system_pods.go:126] duration metric: took 205.267921ms to wait for k8s-apps to be running ...
	I1006 02:24:14.717009 2296817 system_svc.go:44] waiting for kubelet service to be running ....
	I1006 02:24:14.717076 2296817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 02:24:14.731401 2296817 system_svc.go:56] duration metric: took 14.38079ms WaitForService to wait for kubelet.
	I1006 02:24:14.731428 2296817 kubeadm.go:581] duration metric: took 17.704450283s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1006 02:24:14.731448 2296817 node_conditions.go:102] verifying NodePressure condition ...
	I1006 02:24:14.909703 2296817 request.go:629] Waited for 178.171974ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1006 02:24:14.912490 2296817 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 02:24:14.912525 2296817 node_conditions.go:123] node cpu capacity is 2
	I1006 02:24:14.912537 2296817 node_conditions.go:105] duration metric: took 181.083404ms to run NodePressure ...
	I1006 02:24:14.912548 2296817 start.go:228] waiting for startup goroutines ...
	I1006 02:24:14.912556 2296817 start.go:233] waiting for cluster config update ...
	I1006 02:24:14.912566 2296817 start.go:242] writing updated cluster config ...
	I1006 02:24:14.912846 2296817 ssh_runner.go:195] Run: rm -f paused
	I1006 02:24:14.975623 2296817 start.go:600] kubectl: 1.28.2, cluster: 1.18.20 (minor skew: 10)
	I1006 02:24:14.977849 2296817 out.go:177] 
	W1006 02:24:14.979686 2296817 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.18.20.
	I1006 02:24:14.981572 2296817 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1006 02:24:14.983292 2296817 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-923493" cluster and "default" namespace by default
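The skew warning above is purely client-side: the host kubectl is 1.28.2 while the cluster runs 1.18.20, ten minor versions apart. The suggested workaround shells out to a version-matched kubectl that minikube downloads itself; with the profile made explicit via minikube's standard -p flag (added here for clarity, not shown in the hint itself):

	minikube -p ingress-addon-legacy-923493 kubectl -- get pods -A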
	
	* 
	* ==> CRI-O <==
	* Oct 06 02:27:16 ingress-addon-legacy-923493 crio[903]: time="2023-10-06 02:27:16.715470513Z" level=info msg="Creating container: default/hello-world-app-5f5d8b66bb-vbf7n/hello-world-app" id=d5a5d159-024e-4f03-a873-62fb549f6031 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Oct 06 02:27:16 ingress-addon-legacy-923493 crio[903]: time="2023-10-06 02:27:16.715565044Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 06 02:27:16 ingress-addon-legacy-923493 crio[903]: time="2023-10-06 02:27:16.810217725Z" level=info msg="Created container dce1de8e4c211b6d415a23e27e2d7a8eccc2cc7b6c5fa78374aba33fea42611e: default/hello-world-app-5f5d8b66bb-vbf7n/hello-world-app" id=d5a5d159-024e-4f03-a873-62fb549f6031 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Oct 06 02:27:16 ingress-addon-legacy-923493 crio[903]: time="2023-10-06 02:27:16.811457733Z" level=info msg="Starting container: dce1de8e4c211b6d415a23e27e2d7a8eccc2cc7b6c5fa78374aba33fea42611e" id=302d9066-50ec-47b4-a7ca-388dcd9e3895 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Oct 06 02:27:16 ingress-addon-legacy-923493 conmon[3660]: conmon dce1de8e4c211b6d415a <ninfo>: container 3671 exited with status 1
	Oct 06 02:27:16 ingress-addon-legacy-923493 crio[903]: time="2023-10-06 02:27:16.827101730Z" level=info msg="Started container" PID=3671 containerID=dce1de8e4c211b6d415a23e27e2d7a8eccc2cc7b6c5fa78374aba33fea42611e description=default/hello-world-app-5f5d8b66bb-vbf7n/hello-world-app id=302d9066-50ec-47b4-a7ca-388dcd9e3895 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=50571497ffb414b826e14418725d60edccd0cb9cf8a336b9701519e6f6d412d5
	Oct 06 02:27:17 ingress-addon-legacy-923493 crio[903]: time="2023-10-06 02:27:17.127367406Z" level=info msg="Removing container: 81eba6bc2dfe6f77b5790ea251ba7ff39f82777e9c39817581905aa486885c6e" id=7a178434-99e9-4ba4-959d-9b8ef57c3e9b name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Oct 06 02:27:17 ingress-addon-legacy-923493 crio[903]: time="2023-10-06 02:27:17.175953715Z" level=info msg="Removed container 81eba6bc2dfe6f77b5790ea251ba7ff39f82777e9c39817581905aa486885c6e: default/hello-world-app-5f5d8b66bb-vbf7n/hello-world-app" id=7a178434-99e9-4ba4-959d-9b8ef57c3e9b name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Oct 06 02:27:18 ingress-addon-legacy-923493 crio[903]: time="2023-10-06 02:27:18.052180038Z" level=info msg="Stopping container: 2855be5cdea14145ce2a283130038680e9ea9998786602b2dc8300811f2dbecc (timeout: 2s)" id=dc6024eb-c928-4ef8-a30c-4194b6fe905b name=/runtime.v1alpha2.RuntimeService/StopContainer
	Oct 06 02:27:18 ingress-addon-legacy-923493 crio[903]: time="2023-10-06 02:27:18.060331928Z" level=info msg="Stopping container: 2855be5cdea14145ce2a283130038680e9ea9998786602b2dc8300811f2dbecc (timeout: 2s)" id=5aa6aafa-2475-4fa2-a0c4-943064b2fb66 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Oct 06 02:27:20 ingress-addon-legacy-923493 crio[903]: time="2023-10-06 02:27:20.071295176Z" level=warning msg="Stopping container 2855be5cdea14145ce2a283130038680e9ea9998786602b2dc8300811f2dbecc with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=dc6024eb-c928-4ef8-a30c-4194b6fe905b name=/runtime.v1alpha2.RuntimeService/StopContainer
	Oct 06 02:27:20 ingress-addon-legacy-923493 conmon[2753]: conmon 2855be5cdea14145ce2a <ninfo>: container 2764 exited with status 137
	Oct 06 02:27:20 ingress-addon-legacy-923493 crio[903]: time="2023-10-06 02:27:20.256215193Z" level=info msg="Stopped container 2855be5cdea14145ce2a283130038680e9ea9998786602b2dc8300811f2dbecc: ingress-nginx/ingress-nginx-controller-7fcf777cb7-kgvbh/controller" id=5aa6aafa-2475-4fa2-a0c4-943064b2fb66 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Oct 06 02:27:20 ingress-addon-legacy-923493 crio[903]: time="2023-10-06 02:27:20.257367725Z" level=info msg="Stopping pod sandbox: 68328b5184abfab61c6e661df3b58f6d53ab626c8e29d85d72357deffac98046" id=12becd0e-efc8-43fc-93c7-86b92d3b98d3 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 06 02:27:20 ingress-addon-legacy-923493 crio[903]: time="2023-10-06 02:27:20.259037887Z" level=info msg="Stopped container 2855be5cdea14145ce2a283130038680e9ea9998786602b2dc8300811f2dbecc: ingress-nginx/ingress-nginx-controller-7fcf777cb7-kgvbh/controller" id=dc6024eb-c928-4ef8-a30c-4194b6fe905b name=/runtime.v1alpha2.RuntimeService/StopContainer
	Oct 06 02:27:20 ingress-addon-legacy-923493 crio[903]: time="2023-10-06 02:27:20.259490744Z" level=info msg="Stopping pod sandbox: 68328b5184abfab61c6e661df3b58f6d53ab626c8e29d85d72357deffac98046" id=7a92fa4e-6413-492c-82e0-cdf439d540c4 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 06 02:27:20 ingress-addon-legacy-923493 crio[903]: time="2023-10-06 02:27:20.261545135Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-ITOKVHDNQVBTL3UE - [0:0]\n:KUBE-HP-QTPYCV2WZANPU4LG - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-ITOKVHDNQVBTL3UE\n-X KUBE-HP-QTPYCV2WZANPU4LG\nCOMMIT\n"
	Oct 06 02:27:20 ingress-addon-legacy-923493 crio[903]: time="2023-10-06 02:27:20.263129429Z" level=info msg="Closing host port tcp:80"
	Oct 06 02:27:20 ingress-addon-legacy-923493 crio[903]: time="2023-10-06 02:27:20.263172999Z" level=info msg="Closing host port tcp:443"
	Oct 06 02:27:20 ingress-addon-legacy-923493 crio[903]: time="2023-10-06 02:27:20.264377790Z" level=info msg="Host port tcp:80 does not have an open socket"
	Oct 06 02:27:20 ingress-addon-legacy-923493 crio[903]: time="2023-10-06 02:27:20.264401396Z" level=info msg="Host port tcp:443 does not have an open socket"
	Oct 06 02:27:20 ingress-addon-legacy-923493 crio[903]: time="2023-10-06 02:27:20.264543279Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-kgvbh Namespace:ingress-nginx ID:68328b5184abfab61c6e661df3b58f6d53ab626c8e29d85d72357deffac98046 UID:8398c8e5-94a6-4ab4-9572-21ba2939a615 NetNS:/var/run/netns/f2a87b54-f58b-422d-9ad0-4ce7d0888f02 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 06 02:27:20 ingress-addon-legacy-923493 crio[903]: time="2023-10-06 02:27:20.264721658Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-kgvbh from CNI network \"kindnet\" (type=ptp)"
	Oct 06 02:27:20 ingress-addon-legacy-923493 crio[903]: time="2023-10-06 02:27:20.292654830Z" level=info msg="Stopped pod sandbox: 68328b5184abfab61c6e661df3b58f6d53ab626c8e29d85d72357deffac98046" id=12becd0e-efc8-43fc-93c7-86b92d3b98d3 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 06 02:27:20 ingress-addon-legacy-923493 crio[903]: time="2023-10-06 02:27:20.292767134Z" level=info msg="Stopped pod sandbox (already stopped): 68328b5184abfab61c6e661df3b58f6d53ab626c8e29d85d72357deffac98046" id=7a92fa4e-6413-492c-82e0-cdf439d540c4 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
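For readability, the escaped payload in the "Restoring iptables rules" entry at 02:27:20.261 decodes to the following iptables-restore input; it deletes the two per-pod hostport chains (the ones backing host ports 80 and 443) that belonged to the stopped ingress controller:

	*nat
	:KUBE-HP-ITOKVHDNQVBTL3UE - [0:0]
	:KUBE-HP-QTPYCV2WZANPU4LG - [0:0]
	:KUBE-HOSTPORTS - [0:0]
	-X KUBE-HP-ITOKVHDNQVBTL3UE
	-X KUBE-HP-QTPYCV2WZANPU4LG
	COMMIT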
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dce1de8e4c211       97e050c3e21e9472ce8eb8fcb7bb8f23063c0b473fe44bdc42246bb01c15cdd4                                                   9 seconds ago       Exited              hello-world-app           2                   50571497ffb41       hello-world-app-5f5d8b66bb-vbf7n
	7b996d53325fb       docker.io/library/nginx@sha256:4c93a3bd8bf95412889dd84213570102176b6052d88bb828eaf449c56aca55ef                    2 minutes ago       Running             nginx                     0                   8bd4c3e636e25       nginx
	2855be5cdea14       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   68328b5184abf       ingress-nginx-controller-7fcf777cb7-kgvbh
	d2aeca3e10877       a883f7fc35610a84d589cbb450eade9face1d1a8b2cbdafa1690cbffe68cfe88                                                   3 minutes ago       Exited              patch                     1                   967250136b5bf       ingress-nginx-admission-patch-c76hr
	86a6dcffafa12       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              create                    0                   aba7f0336a2e7       ingress-nginx-admission-create-lxjl4
	23278ddf96465       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2    3 minutes ago       Running             storage-provisioner       0                   d0b9574c65683       storage-provisioner
	1f82d84da4101       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                   3 minutes ago       Running             coredns                   0                   e9b5a2fd459cb       coredns-66bff467f8-xg5qv
	14f259c907982       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   df4ee32a02e87       kindnet-2lmjb
	11a86a0e68922       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                   3 minutes ago       Running             kube-proxy                0                   6e362662afd4c       kube-proxy-gm2bd
	da50f96b0d218       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                   3 minutes ago       Running             kube-scheduler            0                   01633c52aaae7       kube-scheduler-ingress-addon-legacy-923493
	9ccd99e7ab8e6       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                   3 minutes ago       Running             kube-controller-manager   0                   2200d33971e59       kube-controller-manager-ingress-addon-legacy-923493
	0b72c429af916       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                   3 minutes ago       Running             etcd                      0                   ee47c7307ab49       etcd-ingress-addon-legacy-923493
	d23fbc0f3aa54       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                   3 minutes ago       Running             kube-apiserver            0                   d7be30fcdad23       kube-apiserver-ingress-addon-legacy-923493
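The table above is in crictl's ps format; assuming crictl is available inside the node (this log does not show how the table was gathered), the same view can be pulled with:

	minikube -p ingress-addon-legacy-923493 ssh -- sudo crictl ps -a

Note that hello-world-app is Exited on ATTEMPT 2 (it exited with status 1 right after starting, per the CRI-O log above) and the ingress-nginx controller has already been stopped.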
	
	* 
	* ==> coredns [1f82d84da4101f9a4a183960339c5ae60ff234d76ae80e6712e33d89dab01c17] <==
	* [INFO] 10.244.0.5:57677 - 57037 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000149908s
	[INFO] 10.244.0.5:57677 - 38282 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000056575s
	[INFO] 10.244.0.5:57677 - 25048 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001347003s
	[INFO] 10.244.0.5:55050 - 49994 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001781358s
	[INFO] 10.244.0.5:55050 - 54344 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000136846s
	[INFO] 10.244.0.5:57677 - 16039 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001005021s
	[INFO] 10.244.0.5:57677 - 19490 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000060283s
	[INFO] 10.244.0.5:36441 - 38599 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000120099s
	[INFO] 10.244.0.5:33234 - 54302 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000190818s
	[INFO] 10.244.0.5:33234 - 45032 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000043487s
	[INFO] 10.244.0.5:36441 - 28215 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000188726s
	[INFO] 10.244.0.5:36441 - 47816 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000077686s
	[INFO] 10.244.0.5:36441 - 11869 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000042429s
	[INFO] 10.244.0.5:33234 - 23185 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000035069s
	[INFO] 10.244.0.5:36441 - 56051 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000042823s
	[INFO] 10.244.0.5:33234 - 50262 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00004471s
	[INFO] 10.244.0.5:33234 - 33449 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000044898s
	[INFO] 10.244.0.5:36441 - 48701 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000029038s
	[INFO] 10.244.0.5:33234 - 41344 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000029662s
	[INFO] 10.244.0.5:36441 - 24990 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001131413s
	[INFO] 10.244.0.5:33234 - 7750 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001266363s
	[INFO] 10.244.0.5:33234 - 34475 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000694884s
	[INFO] 10.244.0.5:36441 - 40658 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001141933s
	[INFO] 10.244.0.5:33234 - 13422 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00005573s
	[INFO] 10.244.0.5:36441 - 30107 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000096221s
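The NXDOMAIN run above is the ordinary Kubernetes search-path walk: the client at 10.244.0.5 (the ingress-nginx controller pod, judging by its first search suffix) tries each search domain appended to hello-world-app.default.svc.cluster.local before the bare name finally returns NOERROR. A sketch of that pod's /etc/resolv.conf consistent with these queries follows; the nameserver address is the conventional kube-dns ClusterIP, assumed rather than read from this log:

	search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	nameserver 10.96.0.10
	options ndots:5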
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-923493
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-923493
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=84890cb24d0240d9d992d7c7712ee519ceed4154
	                    minikube.k8s.io/name=ingress-addon-legacy-923493
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_06T02_23_41_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Oct 2023 02:23:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-923493
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Oct 2023 02:27:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Oct 2023 02:27:14 +0000   Fri, 06 Oct 2023 02:23:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Oct 2023 02:27:14 +0000   Fri, 06 Oct 2023 02:23:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Oct 2023 02:27:14 +0000   Fri, 06 Oct 2023 02:23:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Oct 2023 02:27:14 +0000   Fri, 06 Oct 2023 02:24:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-923493
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 75e9cae6700a4841bb676c03bbd560ac
	  System UUID:                7b16fdf4-0c88-4b01-8875-cbe66703d6e9
	  Boot ID:                    d6810820-8fb1-4098-8489-41f3441712b9
	  Kernel Version:             5.15.0-1047-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-vbf7n                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  kube-system                 coredns-66bff467f8-xg5qv                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m30s
	  kube-system                 etcd-ingress-addon-legacy-923493                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 kindnet-2lmjb                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m30s
	  kube-system                 kube-apiserver-ingress-addon-legacy-923493             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-923493    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 kube-proxy-gm2bd                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 kube-scheduler-ingress-addon-legacy-923493             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  3m56s (x5 over 3m57s)  kubelet     Node ingress-addon-legacy-923493 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m56s (x5 over 3m57s)  kubelet     Node ingress-addon-legacy-923493 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m56s (x4 over 3m57s)  kubelet     Node ingress-addon-legacy-923493 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m42s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m42s                  kubelet     Node ingress-addon-legacy-923493 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m42s                  kubelet     Node ingress-addon-legacy-923493 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m42s                  kubelet     Node ingress-addon-legacy-923493 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m28s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m22s                  kubelet     Node ingress-addon-legacy-923493 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001054] FS-Cache: O-key=[8] '276a3b0000000000'
	[  +0.000712] FS-Cache: N-cookie c=0000009c [p=00000093 fl=2 nc=0 na=1]
	[  +0.000923] FS-Cache: N-cookie d=00000000b75c0848{9p.inode} n=00000000ea8162ba
	[  +0.001002] FS-Cache: N-key=[8] '276a3b0000000000'
	[  +0.002663] FS-Cache: Duplicate cookie detected
	[  +0.000700] FS-Cache: O-cookie c=00000096 [p=00000093 fl=226 nc=0 na=1]
	[  +0.000920] FS-Cache: O-cookie d=00000000b75c0848{9p.inode} n=00000000fc3db6f4
	[  +0.000983] FS-Cache: O-key=[8] '276a3b0000000000'
	[  +0.000674] FS-Cache: N-cookie c=0000009d [p=00000093 fl=2 nc=0 na=1]
	[  +0.000946] FS-Cache: N-cookie d=00000000b75c0848{9p.inode} n=00000000b9bd865e
	[  +0.000999] FS-Cache: N-key=[8] '276a3b0000000000'
	[  +2.732427] FS-Cache: Duplicate cookie detected
	[  +0.000701] FS-Cache: O-cookie c=00000094 [p=00000093 fl=226 nc=0 na=1]
	[  +0.000949] FS-Cache: O-cookie d=00000000b75c0848{9p.inode} n=00000000d93f34d8
	[  +0.000995] FS-Cache: O-key=[8] '266a3b0000000000'
	[  +0.000657] FS-Cache: N-cookie c=0000009f [p=00000093 fl=2 nc=0 na=1]
	[  +0.000872] FS-Cache: N-cookie d=00000000b75c0848{9p.inode} n=000000000c4a9176
	[  +0.000974] FS-Cache: N-key=[8] '266a3b0000000000'
	[  +0.306196] FS-Cache: Duplicate cookie detected
	[  +0.000700] FS-Cache: O-cookie c=00000099 [p=00000093 fl=226 nc=0 na=1]
	[  +0.000922] FS-Cache: O-cookie d=00000000b75c0848{9p.inode} n=0000000019ad38e0
	[  +0.001027] FS-Cache: O-key=[8] '2e6a3b0000000000'
	[  +0.000669] FS-Cache: N-cookie c=000000a0 [p=00000093 fl=2 nc=0 na=1]
	[  +0.000879] FS-Cache: N-cookie d=00000000b75c0848{9p.inode} n=00000000ea8162ba
	[  +0.000981] FS-Cache: N-key=[8] '2e6a3b0000000000'
	
	* 
	* ==> etcd [0b72c429af916dcf4f6fc65b0ac36edcdd74186f275cac6ac6e0c86efbf4b6e5] <==
	* raft2023/10/06 02:23:32 INFO: aec36adc501070cc became follower at term 0
	raft2023/10/06 02:23:32 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/10/06 02:23:32 INFO: aec36adc501070cc became follower at term 1
	raft2023/10/06 02:23:32 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-10-06 02:23:32.450718 W | auth: simple token is not cryptographically signed
	2023-10-06 02:23:32.460198 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-10-06 02:23:32.501154 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-10-06 02:23:32.611292 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-10-06 02:23:32.631360 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-10-06 02:23:32.647586 I | embed: listening for peers on 192.168.49.2:2380
	raft2023/10/06 02:23:32 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-10-06 02:23:32.711346 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	raft2023/10/06 02:23:33 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/10/06 02:23:33 INFO: aec36adc501070cc became candidate at term 2
	raft2023/10/06 02:23:33 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/10/06 02:23:33 INFO: aec36adc501070cc became leader at term 2
	raft2023/10/06 02:23:33 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-10-06 02:23:33.455493 I | etcdserver: setting up the initial cluster version to 3.4
	2023-10-06 02:23:33.480475 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-10-06 02:23:33.480725 I | etcdserver/api: enabled capabilities for version 3.4
	2023-10-06 02:23:33.480781 I | etcdserver: published {Name:ingress-addon-legacy-923493 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-10-06 02:23:33.480812 I | embed: ready to serve client requests
	2023-10-06 02:23:33.583244 I | embed: ready to serve client requests
	2023-10-06 02:23:33.587489 I | embed: serving client requests on 192.168.49.2:2379
	2023-10-06 02:23:33.751074 I | embed: serving client requests on 127.0.0.1:2379
	
	* 
	* ==> kernel <==
	*  02:27:26 up 12:09,  0 users,  load average: 0.76, 1.13, 1.74
	Linux ingress-addon-legacy-923493 5.15.0-1047-aws #52~20.04.1-Ubuntu SMP Thu Sep 21 10:08:54 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [14f259c9079828d5a1a3482cafc0aff1327fa9e0154f36ab1cf58169cedb0955] <==
	* I1006 02:25:21.102521       1 main.go:227] handling current node
	I1006 02:25:31.112941       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1006 02:25:31.112980       1 main.go:227] handling current node
	I1006 02:25:41.125390       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1006 02:25:41.125419       1 main.go:227] handling current node
	I1006 02:25:51.129704       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1006 02:25:51.129734       1 main.go:227] handling current node
	I1006 02:26:01.133938       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1006 02:26:01.133969       1 main.go:227] handling current node
	I1006 02:26:11.142549       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1006 02:26:11.142579       1 main.go:227] handling current node
	I1006 02:26:21.146162       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1006 02:26:21.146189       1 main.go:227] handling current node
	I1006 02:26:31.156371       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1006 02:26:31.156402       1 main.go:227] handling current node
	I1006 02:26:41.159691       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1006 02:26:41.159723       1 main.go:227] handling current node
	I1006 02:26:51.169761       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1006 02:26:51.169793       1 main.go:227] handling current node
	I1006 02:27:01.226375       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1006 02:27:01.226401       1 main.go:227] handling current node
	I1006 02:27:11.234270       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1006 02:27:11.234300       1 main.go:227] handling current node
	I1006 02:27:21.240210       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1006 02:27:21.240240       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [d23fbc0f3aa5423c644ba9fddb7a7801207822f03fe40d7533bf7036b1eec460] <==
	* I1006 02:23:38.239568       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1006 02:23:38.239707       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
	I1006 02:23:38.341771       1 cache.go:39] Caches are synced for autoregister controller
	I1006 02:23:38.345295       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1006 02:23:38.345342       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1006 02:23:38.345364       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1006 02:23:38.348322       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1006 02:23:39.131803       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1006 02:23:39.131849       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1006 02:23:39.141736       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1006 02:23:39.146442       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1006 02:23:39.146472       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1006 02:23:39.569930       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1006 02:23:39.609518       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1006 02:23:39.674201       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1006 02:23:39.675283       1 controller.go:609] quota admission added evaluator for: endpoints
	I1006 02:23:39.679164       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1006 02:23:40.591610       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1006 02:23:41.304220       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1006 02:23:41.409525       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1006 02:23:44.682500       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1006 02:23:56.200050       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1006 02:23:56.447417       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1006 02:24:15.889485       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1006 02:24:38.714406       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [9ccd99e7ab8e66e8bcae0d091d09fab0a67e46dcb6369709dee4211f3a3ea6bf] <==
	* t{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-lo
g", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x400199d770), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40019bf5a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40001d9a40), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1
.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4000c31720)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40019bf5f0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I1006 02:23:56.568595       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
	I1006 02:23:56.593698       1 shared_informer.go:230] Caches are synced for stateful set 
	I1006 02:23:56.596249       1 shared_informer.go:230] Caches are synced for resource quota 
	I1006 02:23:56.642324       1 shared_informer.go:230] Caches are synced for disruption 
	I1006 02:23:56.642426       1 disruption.go:339] Sending events to api server.
	I1006 02:23:56.643439       1 shared_informer.go:230] Caches are synced for service account 
	I1006 02:23:56.672055       1 shared_informer.go:230] Caches are synced for namespace 
	I1006 02:23:56.681359       1 shared_informer.go:230] Caches are synced for resource quota 
	I1006 02:23:56.747842       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1006 02:23:56.747996       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1006 02:23:57.013664       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"1b37ee2c-5c3c-4c0a-89a5-cafa08dc2b7b", APIVersion:"apps/v1", ResourceVersion:"368", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1006 02:23:57.093479       1 request.go:621] Throttling request took 1.043814647s, request: GET:https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1beta1?timeout=32s
	I1006 02:23:57.136453       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"6f7d31c4-daf1-482f-b82b-475d65a51d7e", APIVersion:"apps/v1", ResourceVersion:"369", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-zndvp
	I1006 02:23:57.543643       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
	I1006 02:23:57.543775       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1006 02:24:06.235189       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1006 02:24:15.842820       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"682904ff-d9b3-411c-b827-34d6a64e185d", APIVersion:"apps/v1", ResourceVersion:"477", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1006 02:24:15.865530       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"7672aa67-c017-46c5-9403-09aadf84c7f0", APIVersion:"apps/v1", ResourceVersion:"478", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-kgvbh
	I1006 02:24:15.916315       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"976190dc-0f6c-4e76-96b5-08fcf32bc5c7", APIVersion:"batch/v1", ResourceVersion:"488", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-lxjl4
	I1006 02:24:15.932232       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"6d768db0-1ab9-4bf4-b7d9-cd530c91d0a6", APIVersion:"batch/v1", ResourceVersion:"491", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-c76hr
	I1006 02:24:18.830230       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"976190dc-0f6c-4e76-96b5-08fcf32bc5c7", APIVersion:"batch/v1", ResourceVersion:"497", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1006 02:24:19.807157       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"6d768db0-1ab9-4bf4-b7d9-cd530c91d0a6", APIVersion:"batch/v1", ResourceVersion:"503", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1006 02:26:59.028906       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"75340506-1631-437f-9008-1e21b76d3d29", APIVersion:"apps/v1", ResourceVersion:"720", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1006 02:26:59.047303       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"5690b595-2f4d-4798-be05-93647ad470de", APIVersion:"apps/v1", ResourceVersion:"721", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-vbf7n
	
	* 
	* ==> kube-proxy [11a86a0e6892203bb2123df16a8de00c2835fabdfa17573876a9833744ce2848] <==
	* W1006 02:23:58.196904       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1006 02:23:58.209912       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1006 02:23:58.209963       1 server_others.go:186] Using iptables Proxier.
	I1006 02:23:58.210353       1 server.go:583] Version: v1.18.20
	I1006 02:23:58.213111       1 config.go:315] Starting service config controller
	I1006 02:23:58.213166       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1006 02:23:58.213588       1 config.go:133] Starting endpoints config controller
	I1006 02:23:58.213603       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1006 02:23:58.313512       1 shared_informer.go:230] Caches are synced for service config 
	I1006 02:23:58.313982       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [da50f96b0d218e3588a0bf45f6808aca7438b3856c713608215cc7b2921de23c] <==
	* I1006 02:23:38.364190       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1006 02:23:38.364303       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1006 02:23:38.369693       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1006 02:23:38.369736       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1006 02:23:38.370940       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1006 02:23:38.371089       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1006 02:23:38.384738       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1006 02:23:38.394474       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1006 02:23:38.394937       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1006 02:23:38.395092       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1006 02:23:38.395362       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1006 02:23:38.395464       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1006 02:23:38.395697       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1006 02:23:38.395822       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1006 02:23:38.396950       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1006 02:23:38.397396       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1006 02:23:38.397486       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1006 02:23:38.402378       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1006 02:23:39.283092       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1006 02:23:39.385915       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1006 02:23:39.395422       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1006 02:23:39.429308       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1006 02:23:41.969894       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1006 02:23:56.250489       1 factory.go:503] pod: kube-system/coredns-66bff467f8-zndvp is already present in the active queue
	E1006 02:23:56.271478       1 factory.go:503] pod: kube-system/coredns-66bff467f8-xg5qv is already present in the active queue
	
	* 
	* ==> kubelet <==
	* Oct 06 02:27:04 ingress-addon-legacy-923493 kubelet[1615]: I1006 02:27:04.098574    1615 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 93f9f4be1f134e4f0194113700ae1ca09a2a2f19c615a76e4aed3c8a845e4324
	Oct 06 02:27:04 ingress-addon-legacy-923493 kubelet[1615]: I1006 02:27:04.098823    1615 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 81eba6bc2dfe6f77b5790ea251ba7ff39f82777e9c39817581905aa486885c6e
	Oct 06 02:27:04 ingress-addon-legacy-923493 kubelet[1615]: E1006 02:27:04.099319    1615 pod_workers.go:191] Error syncing pod 017172e5-fb6e-4964-bbe6-cf5cf04912a7 ("hello-world-app-5f5d8b66bb-vbf7n_default(017172e5-fb6e-4964-bbe6-cf5cf04912a7)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-vbf7n_default(017172e5-fb6e-4964-bbe6-cf5cf04912a7)"
	Oct 06 02:27:05 ingress-addon-legacy-923493 kubelet[1615]: I1006 02:27:05.101718    1615 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 81eba6bc2dfe6f77b5790ea251ba7ff39f82777e9c39817581905aa486885c6e
	Oct 06 02:27:05 ingress-addon-legacy-923493 kubelet[1615]: E1006 02:27:05.102101    1615 pod_workers.go:191] Error syncing pod 017172e5-fb6e-4964-bbe6-cf5cf04912a7 ("hello-world-app-5f5d8b66bb-vbf7n_default(017172e5-fb6e-4964-bbe6-cf5cf04912a7)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-vbf7n_default(017172e5-fb6e-4964-bbe6-cf5cf04912a7)"
	Oct 06 02:27:12 ingress-addon-legacy-923493 kubelet[1615]: E1006 02:27:12.712188    1615 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 06 02:27:12 ingress-addon-legacy-923493 kubelet[1615]: E1006 02:27:12.712239    1615 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 06 02:27:12 ingress-addon-legacy-923493 kubelet[1615]: E1006 02:27:12.712286    1615 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 06 02:27:12 ingress-addon-legacy-923493 kubelet[1615]: E1006 02:27:12.712316    1615 pod_workers.go:191] Error syncing pod 44bb57c8-b559-4292-a245-f4d62b7d1e46 ("kube-ingress-dns-minikube_kube-system(44bb57c8-b559-4292-a245-f4d62b7d1e46)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Oct 06 02:27:15 ingress-addon-legacy-923493 kubelet[1615]: I1006 02:27:15.169463    1615 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-2lq4q" (UniqueName: "kubernetes.io/secret/44bb57c8-b559-4292-a245-f4d62b7d1e46-minikube-ingress-dns-token-2lq4q") pod "44bb57c8-b559-4292-a245-f4d62b7d1e46" (UID: "44bb57c8-b559-4292-a245-f4d62b7d1e46")
	Oct 06 02:27:15 ingress-addon-legacy-923493 kubelet[1615]: I1006 02:27:15.174099    1615 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44bb57c8-b559-4292-a245-f4d62b7d1e46-minikube-ingress-dns-token-2lq4q" (OuterVolumeSpecName: "minikube-ingress-dns-token-2lq4q") pod "44bb57c8-b559-4292-a245-f4d62b7d1e46" (UID: "44bb57c8-b559-4292-a245-f4d62b7d1e46"). InnerVolumeSpecName "minikube-ingress-dns-token-2lq4q". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 06 02:27:15 ingress-addon-legacy-923493 kubelet[1615]: I1006 02:27:15.269806    1615 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-2lq4q" (UniqueName: "kubernetes.io/secret/44bb57c8-b559-4292-a245-f4d62b7d1e46-minikube-ingress-dns-token-2lq4q") on node "ingress-addon-legacy-923493" DevicePath ""
	Oct 06 02:27:16 ingress-addon-legacy-923493 kubelet[1615]: I1006 02:27:16.710977    1615 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 81eba6bc2dfe6f77b5790ea251ba7ff39f82777e9c39817581905aa486885c6e
	Oct 06 02:27:17 ingress-addon-legacy-923493 kubelet[1615]: I1006 02:27:17.125183    1615 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 81eba6bc2dfe6f77b5790ea251ba7ff39f82777e9c39817581905aa486885c6e
	Oct 06 02:27:17 ingress-addon-legacy-923493 kubelet[1615]: I1006 02:27:17.125388    1615 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: dce1de8e4c211b6d415a23e27e2d7a8eccc2cc7b6c5fa78374aba33fea42611e
	Oct 06 02:27:17 ingress-addon-legacy-923493 kubelet[1615]: E1006 02:27:17.125710    1615 pod_workers.go:191] Error syncing pod 017172e5-fb6e-4964-bbe6-cf5cf04912a7 ("hello-world-app-5f5d8b66bb-vbf7n_default(017172e5-fb6e-4964-bbe6-cf5cf04912a7)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-vbf7n_default(017172e5-fb6e-4964-bbe6-cf5cf04912a7)"
	Oct 06 02:27:18 ingress-addon-legacy-923493 kubelet[1615]: E1006 02:27:18.054291    1615 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-kgvbh.178b63a36477a8e6", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-kgvbh", UID:"8398c8e5-94a6-4ab4-9572-21ba2939a615", APIVersion:"v1", ResourceVersion:"482", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-923493"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13ffa018311ece6, ext:216822164714, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13ffa018311ece6, ext:216822164714, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-kgvbh.178b63a36477a8e6" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Oct 06 02:27:18 ingress-addon-legacy-923493 kubelet[1615]: E1006 02:27:18.068683    1615 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-kgvbh.178b63a36477a8e6", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-kgvbh", UID:"8398c8e5-94a6-4ab4-9572-21ba2939a615", APIVersion:"v1", ResourceVersion:"482", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-923493"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13ffa018311ece6, ext:216822164714, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13ffa01838f2e7e, ext:216830373498, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-kgvbh.178b63a36477a8e6" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Oct 06 02:27:21 ingress-addon-legacy-923493 kubelet[1615]: W1006 02:27:21.134390    1615 pod_container_deletor.go:77] Container "68328b5184abfab61c6e661df3b58f6d53ab626c8e29d85d72357deffac98046" not found in pod's containers
	Oct 06 02:27:22 ingress-addon-legacy-923493 kubelet[1615]: I1006 02:27:22.189667    1615 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-65867" (UniqueName: "kubernetes.io/secret/8398c8e5-94a6-4ab4-9572-21ba2939a615-ingress-nginx-token-65867") pod "8398c8e5-94a6-4ab4-9572-21ba2939a615" (UID: "8398c8e5-94a6-4ab4-9572-21ba2939a615")
	Oct 06 02:27:22 ingress-addon-legacy-923493 kubelet[1615]: I1006 02:27:22.189729    1615 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/8398c8e5-94a6-4ab4-9572-21ba2939a615-webhook-cert") pod "8398c8e5-94a6-4ab4-9572-21ba2939a615" (UID: "8398c8e5-94a6-4ab4-9572-21ba2939a615")
	Oct 06 02:27:22 ingress-addon-legacy-923493 kubelet[1615]: I1006 02:27:22.196290    1615 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8398c8e5-94a6-4ab4-9572-21ba2939a615-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "8398c8e5-94a6-4ab4-9572-21ba2939a615" (UID: "8398c8e5-94a6-4ab4-9572-21ba2939a615"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 06 02:27:22 ingress-addon-legacy-923493 kubelet[1615]: I1006 02:27:22.196474    1615 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8398c8e5-94a6-4ab4-9572-21ba2939a615-ingress-nginx-token-65867" (OuterVolumeSpecName: "ingress-nginx-token-65867") pod "8398c8e5-94a6-4ab4-9572-21ba2939a615" (UID: "8398c8e5-94a6-4ab4-9572-21ba2939a615"). InnerVolumeSpecName "ingress-nginx-token-65867". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 06 02:27:22 ingress-addon-legacy-923493 kubelet[1615]: I1006 02:27:22.290055    1615 reconciler.go:319] Volume detached for volume "ingress-nginx-token-65867" (UniqueName: "kubernetes.io/secret/8398c8e5-94a6-4ab4-9572-21ba2939a615-ingress-nginx-token-65867") on node "ingress-addon-legacy-923493" DevicePath ""
	Oct 06 02:27:22 ingress-addon-legacy-923493 kubelet[1615]: I1006 02:27:22.290110    1615 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/8398c8e5-94a6-4ab4-9572-21ba2939a615-webhook-cert") on node "ingress-addon-legacy-923493" DevicePath ""
	
	* 
	* ==> storage-provisioner [23278ddf964653bcf723bbf943e29d055f5400119f2814f91948cde6c1f6b07f] <==
	* I1006 02:24:11.884443       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1006 02:24:11.898430       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1006 02:24:11.898530       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1006 02:24:11.912120       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1006 02:24:11.912499       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-923493_7efc38a4-d21b-4b4e-b8f2-a2075c91108a!
	I1006 02:24:11.913275       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"268fbe22-9f28-4a9e-bc4e-7f664e0698a2", APIVersion:"v1", ResourceVersion:"434", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-923493_7efc38a4-d21b-4b4e-b8f2-a2075c91108a became leader
	I1006 02:24:12.013407       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-923493_7efc38a4-d21b-4b4e-b8f2-a2075c91108a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-923493 -n ingress-addon-legacy-923493
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-923493 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (179.37s)
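The "Operation cannot be fulfilled on daemonsets.apps \"kindnet\": the object has been modified" error in the kube-controller-manager log above is the API server's optimistic-concurrency check rejecting an update made against a stale resourceVersion. Controllers normally absorb this by re-reading the object and retrying the write, which is why the error is usually benign noise rather than the cause of a failure. A minimal client-go sketch of that pattern (the function and annotation key are illustrative assumptions, not code from this test):

	package example

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// touchKindnet re-reads the DaemonSet on every attempt, so each retry
	// writes against the latest resourceVersion instead of a stale one.
	func touchKindnet(ctx context.Context, client kubernetes.Interface) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			ds, err := client.AppsV1().DaemonSets("kube-system").Get(ctx, "kindnet", metav1.GetOptions{})
			if err != nil {
				return err
			}
			if ds.Annotations == nil {
				ds.Annotations = map[string]string{}
			}
			ds.Annotations["example/touched"] = "true" // hypothetical mutation
			_, err = client.AppsV1().DaemonSets("kube-system").Update(ctx, ds, metav1.UpdateOptions{})
			return err
		})
	}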
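The kubelet's ImageInspectError entries above are CRI-O's short-name policy at work: "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:..." carries no registry host, and the node's /etc/containers/registries.conf defines no unqualified-search registries, so the bare name cannot be resolved at all. Fully qualifying the image (e.g. docker.io/cryptexlabs/minikube-ingress-dns:0.3.0@sha256:...) sidesteps the policy; alternatively the node can declare a search list. A sketch of the latter (assuming docker.io is the intended registry here):

	# /etc/containers/registries.conf (TOML)
	# Lets CRI-O resolve bare names like "cryptexlabs/..." against docker.io.
	unqualified-search-registries = ["docker.io"]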

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (4.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-951739 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-951739 -- exec busybox-5bc68d56bd-qkd4k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-951739 -- exec busybox-5bc68d56bd-qkd4k -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-951739 -- exec busybox-5bc68d56bd-qkd4k -- sh -c "ping -c 1 192.168.58.1": exit status 1 (264.78221ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-qkd4k): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-951739 -- exec busybox-5bc68d56bd-z7b7t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-951739 -- exec busybox-5bc68d56bd-z7b7t -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-951739 -- exec busybox-5bc68d56bd-z7b7t -- sh -c "ping -c 1 192.168.58.1": exit status 1 (248.838062ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-z7b7t): exit status 1
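Both pods resolve host.minikube.internal and print the PING header, so this is not a connectivity failure: "ping: permission denied (are you root?)" from BusyBox typically means the container cannot use a raw ICMP socket, which requires CAP_NET_RAW (or a net.ipv4.ping_group_range covering the pod's group). A pod-spec sketch that would grant the capability (the name and image are illustrative; this securityContext is not part of testdata/multinodes/multinode-pod-dns-test.yaml):

	apiVersion: v1
	kind: Pod
	metadata:
	  name: busybox-ping            # hypothetical pod for illustration
	spec:
	  containers:
	  - name: busybox
	    image: busybox:1.28
	    command: ["sleep", "3600"]
	    securityContext:
	      capabilities:
	        add: ["NET_RAW"]        # raw ICMP socket for ping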
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-951739
helpers_test.go:235: (dbg) docker inspect multinode-951739:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9edc152e73eb4597075d85be5cd5b8d85a9eb4dcbd14cb603ed7c177be913edc",
	        "Created": "2023-10-06T02:34:00.324168607Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2334014,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-06T02:34:00.639947743Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7c31788aee97084e64d3a410721295a10fc01c1f34b468c1bc9be09686708026",
	        "ResolvConfPath": "/var/lib/docker/containers/9edc152e73eb4597075d85be5cd5b8d85a9eb4dcbd14cb603ed7c177be913edc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9edc152e73eb4597075d85be5cd5b8d85a9eb4dcbd14cb603ed7c177be913edc/hostname",
	        "HostsPath": "/var/lib/docker/containers/9edc152e73eb4597075d85be5cd5b8d85a9eb4dcbd14cb603ed7c177be913edc/hosts",
	        "LogPath": "/var/lib/docker/containers/9edc152e73eb4597075d85be5cd5b8d85a9eb4dcbd14cb603ed7c177be913edc/9edc152e73eb4597075d85be5cd5b8d85a9eb4dcbd14cb603ed7c177be913edc-json.log",
	        "Name": "/multinode-951739",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-951739:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-951739",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1f9e6b6a1f89c13ef11dbb0ffb1d04ca7a5b4df8bcfbd1abf35007a1a7ebeb0d-init/diff:/var/lib/docker/overlay2/ab4f4fc5e8cd2d4bbf1718e21432b9cb0d953b7279be1c1cbb7bd550f03b46dc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f9e6b6a1f89c13ef11dbb0ffb1d04ca7a5b4df8bcfbd1abf35007a1a7ebeb0d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f9e6b6a1f89c13ef11dbb0ffb1d04ca7a5b4df8bcfbd1abf35007a1a7ebeb0d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f9e6b6a1f89c13ef11dbb0ffb1d04ca7a5b4df8bcfbd1abf35007a1a7ebeb0d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-951739",
	                "Source": "/var/lib/docker/volumes/multinode-951739/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-951739",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-951739",
	                "name.minikube.sigs.k8s.io": "multinode-951739",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e9d0c35b3c7b7bb68af4c1f2974522d04f7869f62e6185817ad42d8e273bba20",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35339"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35338"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35335"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35337"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35336"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e9d0c35b3c7b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-951739": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9edc152e73eb",
	                        "multinode-951739"
	                    ],
	                    "NetworkID": "8cf15a65a1dd2236a922e166a456926c866267e129fd17340f090c724b48446c",
	                    "EndpointID": "a8155e3dd034857e9ce6ae2418b784cf853a3018e8e5355a779f491b549965d0",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p multinode-951739 -n multinode-951739
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p multinode-951739 logs -n 25: (1.561574786s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-058500                           | mount-start-2-058500 | jenkins | v1.31.2 | 06 Oct 23 02:33 UTC | 06 Oct 23 02:33 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-058500 ssh -- ls                    | mount-start-2-058500 | jenkins | v1.31.2 | 06 Oct 23 02:33 UTC | 06 Oct 23 02:33 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-056662                           | mount-start-1-056662 | jenkins | v1.31.2 | 06 Oct 23 02:33 UTC | 06 Oct 23 02:33 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-058500 ssh -- ls                    | mount-start-2-058500 | jenkins | v1.31.2 | 06 Oct 23 02:33 UTC | 06 Oct 23 02:33 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-058500                           | mount-start-2-058500 | jenkins | v1.31.2 | 06 Oct 23 02:33 UTC | 06 Oct 23 02:33 UTC |
	| start   | -p mount-start-2-058500                           | mount-start-2-058500 | jenkins | v1.31.2 | 06 Oct 23 02:33 UTC | 06 Oct 23 02:33 UTC |
	| ssh     | mount-start-2-058500 ssh -- ls                    | mount-start-2-058500 | jenkins | v1.31.2 | 06 Oct 23 02:33 UTC | 06 Oct 23 02:33 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-058500                           | mount-start-2-058500 | jenkins | v1.31.2 | 06 Oct 23 02:33 UTC | 06 Oct 23 02:33 UTC |
	| delete  | -p mount-start-1-056662                           | mount-start-1-056662 | jenkins | v1.31.2 | 06 Oct 23 02:33 UTC | 06 Oct 23 02:33 UTC |
	| start   | -p multinode-951739                               | multinode-951739     | jenkins | v1.31.2 | 06 Oct 23 02:33 UTC | 06 Oct 23 02:36 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-951739 -- apply -f                   | multinode-951739     | jenkins | v1.31.2 | 06 Oct 23 02:36 UTC | 06 Oct 23 02:36 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-951739 -- rollout                    | multinode-951739     | jenkins | v1.31.2 | 06 Oct 23 02:36 UTC | 06 Oct 23 02:36 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-951739 -- get pods -o                | multinode-951739     | jenkins | v1.31.2 | 06 Oct 23 02:36 UTC | 06 Oct 23 02:36 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-951739 -- get pods -o                | multinode-951739     | jenkins | v1.31.2 | 06 Oct 23 02:36 UTC | 06 Oct 23 02:36 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-951739 -- exec                       | multinode-951739     | jenkins | v1.31.2 | 06 Oct 23 02:36 UTC | 06 Oct 23 02:36 UTC |
	|         | busybox-5bc68d56bd-qkd4k --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-951739 -- exec                       | multinode-951739     | jenkins | v1.31.2 | 06 Oct 23 02:36 UTC | 06 Oct 23 02:36 UTC |
	|         | busybox-5bc68d56bd-z7b7t --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-951739 -- exec                       | multinode-951739     | jenkins | v1.31.2 | 06 Oct 23 02:36 UTC | 06 Oct 23 02:36 UTC |
	|         | busybox-5bc68d56bd-qkd4k --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-951739 -- exec                       | multinode-951739     | jenkins | v1.31.2 | 06 Oct 23 02:36 UTC | 06 Oct 23 02:36 UTC |
	|         | busybox-5bc68d56bd-z7b7t --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-951739 -- exec                       | multinode-951739     | jenkins | v1.31.2 | 06 Oct 23 02:36 UTC | 06 Oct 23 02:36 UTC |
	|         | busybox-5bc68d56bd-qkd4k -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-951739 -- exec                       | multinode-951739     | jenkins | v1.31.2 | 06 Oct 23 02:36 UTC | 06 Oct 23 02:36 UTC |
	|         | busybox-5bc68d56bd-z7b7t -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-951739 -- get pods -o                | multinode-951739     | jenkins | v1.31.2 | 06 Oct 23 02:36 UTC | 06 Oct 23 02:36 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-951739 -- exec                       | multinode-951739     | jenkins | v1.31.2 | 06 Oct 23 02:36 UTC | 06 Oct 23 02:36 UTC |
	|         | busybox-5bc68d56bd-qkd4k                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-951739 -- exec                       | multinode-951739     | jenkins | v1.31.2 | 06 Oct 23 02:36 UTC |                     |
	|         | busybox-5bc68d56bd-qkd4k -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-951739 -- exec                       | multinode-951739     | jenkins | v1.31.2 | 06 Oct 23 02:36 UTC | 06 Oct 23 02:36 UTC |
	|         | busybox-5bc68d56bd-z7b7t                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-951739 -- exec                       | multinode-951739     | jenkins | v1.31.2 | 06 Oct 23 02:36 UTC |                     |
	|         | busybox-5bc68d56bd-z7b7t -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/06 02:33:54
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 02:33:54.967795 2333562 out.go:296] Setting OutFile to fd 1 ...
	I1006 02:33:54.968042 2333562 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 02:33:54.968068 2333562 out.go:309] Setting ErrFile to fd 2...
	I1006 02:33:54.968087 2333562 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 02:33:54.968419 2333562 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-2262959/.minikube/bin
	I1006 02:33:54.968891 2333562 out.go:303] Setting JSON to false
	I1006 02:33:54.970023 2333562 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":44181,"bootTime":1696515454,"procs":283,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1006 02:33:54.970135 2333562 start.go:138] virtualization:  
	I1006 02:33:54.972628 2333562 out.go:177] * [multinode-951739] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1006 02:33:54.974949 2333562 out.go:177]   - MINIKUBE_LOCATION=17314
	I1006 02:33:54.975127 2333562 notify.go:220] Checking for updates...
	I1006 02:33:54.977340 2333562 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 02:33:54.979443 2333562 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17314-2262959/kubeconfig
	I1006 02:33:54.981321 2333562 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-2262959/.minikube
	I1006 02:33:54.983153 2333562 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 02:33:54.984940 2333562 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 02:33:54.987390 2333562 driver.go:378] Setting default libvirt URI to qemu:///system
	I1006 02:33:55.017944 2333562 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1006 02:33:55.018054 2333562 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 02:33:55.110239 2333562 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-10-06 02:33:55.099716628 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1006 02:33:55.110372 2333562 docker.go:295] overlay module found
	I1006 02:33:55.112510 2333562 out.go:177] * Using the docker driver based on user configuration
	I1006 02:33:55.114237 2333562 start.go:298] selected driver: docker
	I1006 02:33:55.114255 2333562 start.go:902] validating driver "docker" against <nil>
	I1006 02:33:55.114275 2333562 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 02:33:55.114941 2333562 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 02:33:55.178065 2333562 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-10-06 02:33:55.168426153 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1006 02:33:55.178214 2333562 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1006 02:33:55.178449 2333562 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 02:33:55.180749 2333562 out.go:177] * Using Docker driver with root privileges
	I1006 02:33:55.182902 2333562 cni.go:84] Creating CNI manager for ""
	I1006 02:33:55.182926 2333562 cni.go:136] 0 nodes found, recommending kindnet
	I1006 02:33:55.182938 2333562 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 02:33:55.182951 2333562 start_flags.go:323] config:
	{Name:multinode-951739 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-951739 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1006 02:33:55.185510 2333562 out.go:177] * Starting control plane node multinode-951739 in cluster multinode-951739
	I1006 02:33:55.187336 2333562 cache.go:122] Beginning downloading kic base image for docker with crio
	I1006 02:33:55.189619 2333562 out.go:177] * Pulling base image ...
	I1006 02:33:55.191858 2333562 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1006 02:33:55.191915 2333562 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
	I1006 02:33:55.191937 2333562 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1006 02:33:55.191942 2333562 cache.go:57] Caching tarball of preloaded images
	I1006 02:33:55.192097 2333562 preload.go:174] Found /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1006 02:33:55.192107 2333562 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1006 02:33:55.192469 2333562 profile.go:148] Saving config to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/config.json ...
	I1006 02:33:55.192500 2333562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/config.json: {Name:mkdf52069132044c3ddb60cc4bd794010f7ce7a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
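The block above is the full generated ClusterConfig, and it is persisted verbatim as JSON at the config.json path shown. A quick sanity check of the saved profile (a sketch, assuming python3 is on the PATH; adjust the profile path to your environment):

	python3 -m json.tool /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/config.json | head -n 20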
	I1006 02:33:55.209407 2333562 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
	I1006 02:33:55.209433 2333562 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
	I1006 02:33:55.209460 2333562 cache.go:195] Successfully downloaded all kic artifacts
	I1006 02:33:55.209520 2333562 start.go:365] acquiring machines lock for multinode-951739: {Name:mk88d2c42bbe9ae58598bbad61511871e42f8ebe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 02:33:55.209635 2333562 start.go:369] acquired machines lock for "multinode-951739" in 91.84µs
	I1006 02:33:55.209666 2333562 start.go:93] Provisioning new machine with config: &{Name:multinode-951739 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-951739 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 02:33:55.209750 2333562 start.go:125] createHost starting for "" (driver="docker")
	I1006 02:33:55.212445 2333562 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1006 02:33:55.212727 2333562 start.go:159] libmachine.API.Create for "multinode-951739" (driver="docker")
	I1006 02:33:55.212781 2333562 client.go:168] LocalClient.Create starting
	I1006 02:33:55.212855 2333562 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem
	I1006 02:33:55.212906 2333562 main.go:141] libmachine: Decoding PEM data...
	I1006 02:33:55.212933 2333562 main.go:141] libmachine: Parsing certificate...
	I1006 02:33:55.212989 2333562 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem
	I1006 02:33:55.213012 2333562 main.go:141] libmachine: Decoding PEM data...
	I1006 02:33:55.213026 2333562 main.go:141] libmachine: Parsing certificate...
	I1006 02:33:55.213382 2333562 cli_runner.go:164] Run: docker network inspect multinode-951739 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 02:33:55.230576 2333562 cli_runner.go:211] docker network inspect multinode-951739 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 02:33:55.230670 2333562 network_create.go:281] running [docker network inspect multinode-951739] to gather additional debugging logs...
	I1006 02:33:55.230689 2333562 cli_runner.go:164] Run: docker network inspect multinode-951739
	W1006 02:33:55.248135 2333562 cli_runner.go:211] docker network inspect multinode-951739 returned with exit code 1
	I1006 02:33:55.248164 2333562 network_create.go:284] error running [docker network inspect multinode-951739]: docker network inspect multinode-951739: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-951739 not found
	I1006 02:33:55.248176 2333562 network_create.go:286] output of [docker network inspect multinode-951739]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-951739 not found
	
	** /stderr **
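The non-zero exit here is expected on a fresh run: minikube probes for an existing network before creating one. The same existence check can be reproduced by hand (a sketch, assuming the Docker CLI):

	# exits non-zero and prints the daemon error when the network is absent
	docker network inspect multinode-951739 >/dev/null 2>&1 || echo "network multinode-951739 not found"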
	I1006 02:33:55.248302 2333562 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 02:33:55.268774 2333562 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-23fd96ce330f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:5d:0d:78:1a} reservation:<nil>}
	I1006 02:33:55.269104 2333562 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400250f700}
	I1006 02:33:55.269127 2333562 network_create.go:124] attempt to create docker network multinode-951739 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1006 02:33:55.269196 2333562 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-951739 multinode-951739
	I1006 02:33:55.343089 2333562 network_create.go:108] docker network multinode-951739 192.168.58.0/24 created
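To confirm which subnet and gateway were actually assigned, the same Go-template style used in the inspect calls above works on the new network (a sketch):

	docker network inspect multinode-951739 \
	  --format '{{(index .IPAM.Config 0).Subnet}} gw {{(index .IPAM.Config 0).Gateway}}'
	# expected here: 192.168.58.0/24 gw 192.168.58.1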
	I1006 02:33:55.343120 2333562 kic.go:118] calculated static IP "192.168.58.2" for the "multinode-951739" container
	I1006 02:33:55.343203 2333562 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 02:33:55.359277 2333562 cli_runner.go:164] Run: docker volume create multinode-951739 --label name.minikube.sigs.k8s.io=multinode-951739 --label created_by.minikube.sigs.k8s.io=true
	I1006 02:33:55.377431 2333562 oci.go:103] Successfully created a docker volume multinode-951739
	I1006 02:33:55.377517 2333562 cli_runner.go:164] Run: docker run --rm --name multinode-951739-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-951739 --entrypoint /usr/bin/test -v multinode-951739:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1006 02:33:55.932629 2333562 oci.go:107] Successfully prepared a docker volume multinode-951739
	I1006 02:33:55.932684 2333562 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1006 02:33:55.932706 2333562 kic.go:191] Starting extracting preloaded images to volume ...
	I1006 02:33:55.932794 2333562 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-951739:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir
	I1006 02:34:00.233468 2333562 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-951739:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir: (4.300627089s)
	I1006 02:34:00.233534 2333562 kic.go:200] duration metric: took 4.300823 seconds to extract preloaded images to volume
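The preload is an ordinary lz4-compressed tarball, so its contents can be inspected without a cluster (a sketch, assuming the lz4 CLI is installed locally):

	lz4 -dc /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 | tar -t | head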
	W1006 02:34:00.233690 2333562 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1006 02:34:00.233830 2333562 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 02:34:00.306973 2333562 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-951739 --name multinode-951739 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-951739 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-951739 --network multinode-951739 --ip 192.168.58.2 --volume multinode-951739:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae
	I1006 02:34:00.649150 2333562 cli_runner.go:164] Run: docker container inspect multinode-951739 --format={{.State.Running}}
	I1006 02:34:00.677208 2333562 cli_runner.go:164] Run: docker container inspect multinode-951739 --format={{.State.Status}}
	I1006 02:34:00.704222 2333562 cli_runner.go:164] Run: docker exec multinode-951739 stat /var/lib/dpkg/alternatives/iptables
	I1006 02:34:00.787748 2333562 oci.go:144] the created container "multinode-951739" has a running status.
	I1006 02:34:00.787778 2333562 kic.go:222] Creating ssh key for kic: /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/multinode-951739/id_rsa...
	I1006 02:34:01.106383 2333562 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/multinode-951739/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1006 02:34:01.106439 2333562 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/multinode-951739/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 02:34:01.144957 2333562 cli_runner.go:164] Run: docker container inspect multinode-951739 --format={{.State.Status}}
	I1006 02:34:01.172074 2333562 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 02:34:01.172092 2333562 kic_runner.go:114] Args: [docker exec --privileged multinode-951739 chown docker:docker /home/docker/.ssh/authorized_keys]
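With the public key installed in authorized_keys, the node is reachable over the published SSH port (35339 in this run; the port is ephemeral and differs per run). A minimal manual connection (sketch):

	ssh -o StrictHostKeyChecking=no \
	  -i /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/multinode-951739/id_rsa \
	  -p 35339 docker@127.0.0.1 hostname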
	I1006 02:34:01.246689 2333562 cli_runner.go:164] Run: docker container inspect multinode-951739 --format={{.State.Status}}
	I1006 02:34:01.278621 2333562 machine.go:88] provisioning docker machine ...
	I1006 02:34:01.278654 2333562 ubuntu.go:169] provisioning hostname "multinode-951739"
	I1006 02:34:01.278721 2333562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951739
	I1006 02:34:01.306144 2333562 main.go:141] libmachine: Using SSH client type: native
	I1006 02:34:01.306628 2333562 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 35339 <nil> <nil>}
	I1006 02:34:01.306647 2333562 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-951739 && echo "multinode-951739" | sudo tee /etc/hostname
	I1006 02:34:01.307252 2333562 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40480->127.0.0.1:35339: read: connection reset by peer
	I1006 02:34:04.461778 2333562 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-951739
	
	I1006 02:34:04.461873 2333562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951739
	I1006 02:34:04.479798 2333562 main.go:141] libmachine: Using SSH client type: native
	I1006 02:34:04.480209 2333562 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 35339 <nil> <nil>}
	I1006 02:34:04.480234 2333562 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-951739' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-951739/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-951739' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 02:34:04.612909 2333562 main.go:141] libmachine: SSH cmd err, output: <nil>: 
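The hostname script above is idempotent: grep -xq matches whole lines only, so the 127.0.1.1 entry is rewritten at most once. The guard can be exercised on its own (a sketch, using GNU grep's \s extension as the script does):

	grep -xq '127.0.1.1\s.*' /etc/hosts && echo "127.0.1.1 entry present, would be rewritten in place"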
	I1006 02:34:04.612949 2333562 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17314-2262959/.minikube CaCertPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17314-2262959/.minikube}
	I1006 02:34:04.612984 2333562 ubuntu.go:177] setting up certificates
	I1006 02:34:04.613000 2333562 provision.go:83] configureAuth start
	I1006 02:34:04.613075 2333562 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-951739
	I1006 02:34:04.632749 2333562 provision.go:138] copyHostCerts
	I1006 02:34:04.632788 2333562 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.pem
	I1006 02:34:04.632817 2333562 exec_runner.go:144] found /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.pem, removing ...
	I1006 02:34:04.632823 2333562 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.pem
	I1006 02:34:04.632904 2333562 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.pem (1082 bytes)
	I1006 02:34:04.632985 2333562 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17314-2262959/.minikube/cert.pem
	I1006 02:34:04.633001 2333562 exec_runner.go:144] found /home/jenkins/minikube-integration/17314-2262959/.minikube/cert.pem, removing ...
	I1006 02:34:04.633006 2333562 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17314-2262959/.minikube/cert.pem
	I1006 02:34:04.633032 2333562 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17314-2262959/.minikube/cert.pem (1123 bytes)
	I1006 02:34:04.633068 2333562 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17314-2262959/.minikube/key.pem
	I1006 02:34:04.633083 2333562 exec_runner.go:144] found /home/jenkins/minikube-integration/17314-2262959/.minikube/key.pem, removing ...
	I1006 02:34:04.633086 2333562 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17314-2262959/.minikube/key.pem
	I1006 02:34:04.633109 2333562 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17314-2262959/.minikube/key.pem (1675 bytes)
	I1006 02:34:04.633153 2333562 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca-key.pem org=jenkins.multinode-951739 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-951739]
	I1006 02:34:04.877349 2333562 provision.go:172] copyRemoteCerts
	I1006 02:34:04.877417 2333562 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 02:34:04.877463 2333562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951739
	I1006 02:34:04.899007 2333562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35339 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/multinode-951739/id_rsa Username:docker}
	I1006 02:34:05.001183 2333562 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1006 02:34:05.001254 2333562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1006 02:34:05.031727 2333562 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1006 02:34:05.031796 2333562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 02:34:05.061705 2333562 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1006 02:34:05.061779 2333562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 02:34:05.091021 2333562 provision.go:86] duration metric: configureAuth took 478.005799ms
	I1006 02:34:05.091133 2333562 ubuntu.go:193] setting minikube options for container-runtime
	I1006 02:34:05.091339 2333562 config.go:182] Loaded profile config "multinode-951739": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1006 02:34:05.091458 2333562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951739
	I1006 02:34:05.109801 2333562 main.go:141] libmachine: Using SSH client type: native
	I1006 02:34:05.110236 2333562 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 35339 <nil> <nil>}
	I1006 02:34:05.110257 2333562 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 02:34:05.364772 2333562 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 02:34:05.364836 2333562 machine.go:91] provisioned docker machine in 4.086196024s
	I1006 02:34:05.364859 2333562 client.go:171] LocalClient.Create took 10.15206756s
	I1006 02:34:05.364887 2333562 start.go:167] duration metric: libmachine.API.Create for "multinode-951739" took 10.152162402s
	I1006 02:34:05.364923 2333562 start.go:300] post-start starting for "multinode-951739" (driver="docker")
	I1006 02:34:05.364951 2333562 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 02:34:05.365055 2333562 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 02:34:05.365130 2333562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951739
	I1006 02:34:05.385851 2333562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35339 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/multinode-951739/id_rsa Username:docker}
	I1006 02:34:05.481985 2333562 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 02:34:05.486031 2333562 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1006 02:34:05.486092 2333562 command_runner.go:130] > NAME="Ubuntu"
	I1006 02:34:05.486115 2333562 command_runner.go:130] > VERSION_ID="22.04"
	I1006 02:34:05.486138 2333562 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1006 02:34:05.486152 2333562 command_runner.go:130] > VERSION_CODENAME=jammy
	I1006 02:34:05.486173 2333562 command_runner.go:130] > ID=ubuntu
	I1006 02:34:05.486178 2333562 command_runner.go:130] > ID_LIKE=debian
	I1006 02:34:05.486185 2333562 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1006 02:34:05.486192 2333562 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1006 02:34:05.486200 2333562 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1006 02:34:05.486209 2333562 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1006 02:34:05.486219 2333562 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1006 02:34:05.486267 2333562 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 02:34:05.486295 2333562 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1006 02:34:05.486308 2333562 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1006 02:34:05.486318 2333562 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1006 02:34:05.486329 2333562 filesync.go:126] Scanning /home/jenkins/minikube-integration/17314-2262959/.minikube/addons for local assets ...
	I1006 02:34:05.486390 2333562 filesync.go:126] Scanning /home/jenkins/minikube-integration/17314-2262959/.minikube/files for local assets ...
	I1006 02:34:05.486471 2333562 filesync.go:149] local asset: /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/22683062.pem -> 22683062.pem in /etc/ssl/certs
	I1006 02:34:05.486481 2333562 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/22683062.pem -> /etc/ssl/certs/22683062.pem
	I1006 02:34:05.486581 2333562 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 02:34:05.496990 2333562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/22683062.pem --> /etc/ssl/certs/22683062.pem (1708 bytes)
	I1006 02:34:05.525912 2333562 start.go:303] post-start completed in 160.956875ms
	I1006 02:34:05.526292 2333562 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-951739
	I1006 02:34:05.544222 2333562 profile.go:148] Saving config to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/config.json ...
	I1006 02:34:05.544511 2333562 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 02:34:05.544567 2333562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951739
	I1006 02:34:05.561998 2333562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35339 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/multinode-951739/id_rsa Username:docker}
	I1006 02:34:05.653012 2333562 command_runner.go:130] > 11%!
	(MISSING)I1006 02:34:05.653523 2333562 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 02:34:05.659489 2333562 command_runner.go:130] > 174G
	I1006 02:34:05.659515 2333562 start.go:128] duration metric: createHost completed in 10.449752424s
	I1006 02:34:05.659526 2333562 start.go:83] releasing machines lock for "multinode-951739", held for 10.449877371s
	I1006 02:34:05.659607 2333562 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-951739
	I1006 02:34:05.677583 2333562 ssh_runner.go:195] Run: cat /version.json
	I1006 02:34:05.677648 2333562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951739
	I1006 02:34:05.677594 2333562 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 02:34:05.677789 2333562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951739
	I1006 02:34:05.698853 2333562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35339 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/multinode-951739/id_rsa Username:docker}
	I1006 02:34:05.699298 2333562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35339 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/multinode-951739/id_rsa Username:docker}
	I1006 02:34:05.787327 2333562 command_runner.go:130] > {"iso_version": "v1.31.0-1695060926-17240", "kicbase_version": "v0.0.40-1696360059-17345", "minikube_version": "v1.31.2", "commit": "3da829742e24bcb762d99c062a7806436d0f28e3"}
	I1006 02:34:05.787466 2333562 ssh_runner.go:195] Run: systemctl --version
	I1006 02:34:05.926381 2333562 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1006 02:34:05.929587 2333562 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.10)
	I1006 02:34:05.929668 2333562 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1006 02:34:05.929762 2333562 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 02:34:06.077338 2333562 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1006 02:34:06.083213 2333562 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1006 02:34:06.083289 2333562 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1006 02:34:06.083319 2333562 command_runner.go:130] > Device: 38h/56d	Inode: 1823254     Links: 1
	I1006 02:34:06.083329 2333562 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1006 02:34:06.083351 2333562 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1006 02:34:06.083364 2333562 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1006 02:34:06.083372 2333562 command_runner.go:130] > Change: 2023-10-06 02:11:31.928487767 +0000
	I1006 02:34:06.083378 2333562 command_runner.go:130] >  Birth: 2023-10-06 02:11:31.928487767 +0000
	I1006 02:34:06.083656 2333562 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 02:34:06.109858 2333562 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1006 02:34:06.109984 2333562 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 02:34:06.145746 2333562 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1006 02:34:06.145829 2333562 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
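Renaming to *.mk_disabled (rather than deleting) keeps the stock CNI configs recoverable; undoing it is just the reverse move (a sketch):

	sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*.mk_disabled' \
	  -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;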
	I1006 02:34:06.145852 2333562 start.go:472] detecting cgroup driver to use...
	I1006 02:34:06.145907 2333562 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1006 02:34:06.145972 2333562 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 02:34:06.165518 2333562 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 02:34:06.179394 2333562 docker.go:198] disabling cri-docker service (if available) ...
	I1006 02:34:06.179458 2333562 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 02:34:06.195860 2333562 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 02:34:06.214511 2333562 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 02:34:06.307034 2333562 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 02:34:06.323785 2333562 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1006 02:34:06.417649 2333562 docker.go:214] disabling docker service ...
	I1006 02:34:06.417713 2333562 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 02:34:06.439802 2333562 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 02:34:06.454002 2333562 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 02:34:06.562098 2333562 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1006 02:34:06.562253 2333562 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 02:34:06.668862 2333562 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1006 02:34:06.668935 2333562 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 02:34:06.683193 2333562 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 02:34:06.702459 2333562 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1006 02:34:06.703831 2333562 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1006 02:34:06.703933 2333562 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 02:34:06.716812 2333562 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 02:34:06.716926 2333562 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 02:34:06.729364 2333562 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 02:34:06.741525 2333562 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
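A quick way to verify that the three sed edits above landed as intended, run inside the node (a sketch):

	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# expected:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"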
	I1006 02:34:06.753384 2333562 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 02:34:06.764726 2333562 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 02:34:06.774206 2333562 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1006 02:34:06.775178 2333562 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 02:34:06.785260 2333562 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 02:34:06.881310 2333562 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1006 02:34:07.008782 2333562 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 02:34:07.008851 2333562 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 02:34:07.013948 2333562 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1006 02:34:07.013970 2333562 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1006 02:34:07.013978 2333562 command_runner.go:130] > Device: 44h/68d	Inode: 190         Links: 1
	I1006 02:34:07.013986 2333562 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1006 02:34:07.013992 2333562 command_runner.go:130] > Access: 2023-10-06 02:34:06.994192167 +0000
	I1006 02:34:07.014004 2333562 command_runner.go:130] > Modify: 2023-10-06 02:34:06.994192167 +0000
	I1006 02:34:07.014010 2333562 command_runner.go:130] > Change: 2023-10-06 02:34:06.994192167 +0000
	I1006 02:34:07.014015 2333562 command_runner.go:130] >  Birth: -
	I1006 02:34:07.014030 2333562 start.go:540] Will wait 60s for crictl version
	I1006 02:34:07.014079 2333562 ssh_runner.go:195] Run: which crictl
	I1006 02:34:07.018365 2333562 command_runner.go:130] > /usr/bin/crictl
	I1006 02:34:07.018641 2333562 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1006 02:34:07.063729 2333562 command_runner.go:130] > Version:  0.1.0
	I1006 02:34:07.063790 2333562 command_runner.go:130] > RuntimeName:  cri-o
	I1006 02:34:07.063812 2333562 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1006 02:34:07.063836 2333562 command_runner.go:130] > RuntimeApiVersion:  v1
	I1006 02:34:07.066391 2333562 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1006 02:34:07.066528 2333562 ssh_runner.go:195] Run: crio --version
	I1006 02:34:07.111156 2333562 command_runner.go:130] > crio version 1.24.6
	I1006 02:34:07.111228 2333562 command_runner.go:130] > Version:          1.24.6
	I1006 02:34:07.111251 2333562 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1006 02:34:07.111272 2333562 command_runner.go:130] > GitTreeState:     clean
	I1006 02:34:07.111302 2333562 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1006 02:34:07.111326 2333562 command_runner.go:130] > GoVersion:        go1.18.2
	I1006 02:34:07.111347 2333562 command_runner.go:130] > Compiler:         gc
	I1006 02:34:07.111369 2333562 command_runner.go:130] > Platform:         linux/arm64
	I1006 02:34:07.111402 2333562 command_runner.go:130] > Linkmode:         dynamic
	I1006 02:34:07.111432 2333562 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1006 02:34:07.111452 2333562 command_runner.go:130] > SeccompEnabled:   true
	I1006 02:34:07.111472 2333562 command_runner.go:130] > AppArmorEnabled:  false
	I1006 02:34:07.113376 2333562 ssh_runner.go:195] Run: crio --version
	I1006 02:34:07.156379 2333562 command_runner.go:130] > crio version 1.24.6
	I1006 02:34:07.156441 2333562 command_runner.go:130] > Version:          1.24.6
	I1006 02:34:07.156475 2333562 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1006 02:34:07.156497 2333562 command_runner.go:130] > GitTreeState:     clean
	I1006 02:34:07.156529 2333562 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1006 02:34:07.156555 2333562 command_runner.go:130] > GoVersion:        go1.18.2
	I1006 02:34:07.156581 2333562 command_runner.go:130] > Compiler:         gc
	I1006 02:34:07.156601 2333562 command_runner.go:130] > Platform:         linux/arm64
	I1006 02:34:07.156634 2333562 command_runner.go:130] > Linkmode:         dynamic
	I1006 02:34:07.156661 2333562 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1006 02:34:07.156679 2333562 command_runner.go:130] > SeccompEnabled:   true
	I1006 02:34:07.156698 2333562 command_runner.go:130] > AppArmorEnabled:  false
	I1006 02:34:07.162022 2333562 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	I1006 02:34:07.163886 2333562 cli_runner.go:164] Run: docker network inspect multinode-951739 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 02:34:07.181060 2333562 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1006 02:34:07.185717 2333562 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
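The one-liner above rewrites /etc/hosts atomically: filter out any stale host.minikube.internal line, append the fresh mapping, write everything to a temp file, then copy the result into place. The same idiom expanded for readability (a sketch):

	{
	  grep -v $'\thost.minikube.internal$' /etc/hosts    # drop the old entry, if any
	  printf '192.168.58.1\thost.minikube.internal\n'    # append the fresh mapping
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts                          # replace in one step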
	I1006 02:34:07.199207 2333562 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1006 02:34:07.199284 2333562 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 02:34:07.261590 2333562 command_runner.go:130] > {
	I1006 02:34:07.261608 2333562 command_runner.go:130] >   "images": [
	I1006 02:34:07.261613 2333562 command_runner.go:130] >     {
	I1006 02:34:07.261623 2333562 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I1006 02:34:07.261629 2333562 command_runner.go:130] >       "repoTags": [
	I1006 02:34:07.261636 2333562 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1006 02:34:07.261641 2333562 command_runner.go:130] >       ],
	I1006 02:34:07.261650 2333562 command_runner.go:130] >       "repoDigests": [
	I1006 02:34:07.261661 2333562 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1006 02:34:07.261677 2333562 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I1006 02:34:07.261682 2333562 command_runner.go:130] >       ],
	I1006 02:34:07.261688 2333562 command_runner.go:130] >       "size": "60867618",
	I1006 02:34:07.261695 2333562 command_runner.go:130] >       "uid": null,
	I1006 02:34:07.261701 2333562 command_runner.go:130] >       "username": "",
	I1006 02:34:07.261711 2333562 command_runner.go:130] >       "spec": null,
	I1006 02:34:07.261729 2333562 command_runner.go:130] >       "pinned": false
	I1006 02:34:07.261738 2333562 command_runner.go:130] >     },
	I1006 02:34:07.261743 2333562 command_runner.go:130] >     {
	I1006 02:34:07.261754 2333562 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1006 02:34:07.261762 2333562 command_runner.go:130] >       "repoTags": [
	I1006 02:34:07.261769 2333562 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1006 02:34:07.261773 2333562 command_runner.go:130] >       ],
	I1006 02:34:07.261778 2333562 command_runner.go:130] >       "repoDigests": [
	I1006 02:34:07.261788 2333562 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1006 02:34:07.261801 2333562 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1006 02:34:07.261808 2333562 command_runner.go:130] >       ],
	I1006 02:34:07.261823 2333562 command_runner.go:130] >       "size": "29037500",
	I1006 02:34:07.261831 2333562 command_runner.go:130] >       "uid": null,
	I1006 02:34:07.261836 2333562 command_runner.go:130] >       "username": "",
	I1006 02:34:07.261846 2333562 command_runner.go:130] >       "spec": null,
	I1006 02:34:07.261851 2333562 command_runner.go:130] >       "pinned": false
	I1006 02:34:07.261855 2333562 command_runner.go:130] >     },
	I1006 02:34:07.261859 2333562 command_runner.go:130] >     {
	I1006 02:34:07.261871 2333562 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I1006 02:34:07.261879 2333562 command_runner.go:130] >       "repoTags": [
	I1006 02:34:07.261886 2333562 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1006 02:34:07.261894 2333562 command_runner.go:130] >       ],
	I1006 02:34:07.261899 2333562 command_runner.go:130] >       "repoDigests": [
	I1006 02:34:07.261912 2333562 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I1006 02:34:07.261924 2333562 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I1006 02:34:07.261932 2333562 command_runner.go:130] >       ],
	I1006 02:34:07.261937 2333562 command_runner.go:130] >       "size": "51393451",
	I1006 02:34:07.261942 2333562 command_runner.go:130] >       "uid": null,
	I1006 02:34:07.261948 2333562 command_runner.go:130] >       "username": "",
	I1006 02:34:07.261957 2333562 command_runner.go:130] >       "spec": null,
	I1006 02:34:07.261962 2333562 command_runner.go:130] >       "pinned": false
	I1006 02:34:07.261970 2333562 command_runner.go:130] >     },
	I1006 02:34:07.261975 2333562 command_runner.go:130] >     {
	I1006 02:34:07.261985 2333562 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I1006 02:34:07.261993 2333562 command_runner.go:130] >       "repoTags": [
	I1006 02:34:07.262000 2333562 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1006 02:34:07.262010 2333562 command_runner.go:130] >       ],
	I1006 02:34:07.262015 2333562 command_runner.go:130] >       "repoDigests": [
	I1006 02:34:07.262025 2333562 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I1006 02:34:07.262037 2333562 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I1006 02:34:07.262050 2333562 command_runner.go:130] >       ],
	I1006 02:34:07.262059 2333562 command_runner.go:130] >       "size": "182203183",
	I1006 02:34:07.262064 2333562 command_runner.go:130] >       "uid": {
	I1006 02:34:07.262072 2333562 command_runner.go:130] >         "value": "0"
	I1006 02:34:07.262077 2333562 command_runner.go:130] >       },
	I1006 02:34:07.262085 2333562 command_runner.go:130] >       "username": "",
	I1006 02:34:07.262090 2333562 command_runner.go:130] >       "spec": null,
	I1006 02:34:07.262095 2333562 command_runner.go:130] >       "pinned": false
	I1006 02:34:07.262100 2333562 command_runner.go:130] >     },
	I1006 02:34:07.262106 2333562 command_runner.go:130] >     {
	I1006 02:34:07.262114 2333562 command_runner.go:130] >       "id": "30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c",
	I1006 02:34:07.262122 2333562 command_runner.go:130] >       "repoTags": [
	I1006 02:34:07.262129 2333562 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.2"
	I1006 02:34:07.262137 2333562 command_runner.go:130] >       ],
	I1006 02:34:07.262144 2333562 command_runner.go:130] >       "repoDigests": [
	I1006 02:34:07.262157 2333562 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:07742a71be5e2ac5dc434618fa720ba38bebb463e3bdc0c58b600b4f7716bc3d",
	I1006 02:34:07.262169 2333562 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c"
	I1006 02:34:07.262174 2333562 command_runner.go:130] >       ],
	I1006 02:34:07.262179 2333562 command_runner.go:130] >       "size": "121054158",
	I1006 02:34:07.262186 2333562 command_runner.go:130] >       "uid": {
	I1006 02:34:07.262195 2333562 command_runner.go:130] >         "value": "0"
	I1006 02:34:07.262199 2333562 command_runner.go:130] >       },
	I1006 02:34:07.262208 2333562 command_runner.go:130] >       "username": "",
	I1006 02:34:07.262213 2333562 command_runner.go:130] >       "spec": null,
	I1006 02:34:07.262221 2333562 command_runner.go:130] >       "pinned": false
	I1006 02:34:07.262225 2333562 command_runner.go:130] >     },
	I1006 02:34:07.262234 2333562 command_runner.go:130] >     {
	I1006 02:34:07.262242 2333562 command_runner.go:130] >       "id": "89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c",
	I1006 02:34:07.262250 2333562 command_runner.go:130] >       "repoTags": [
	I1006 02:34:07.262257 2333562 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.2"
	I1006 02:34:07.262264 2333562 command_runner.go:130] >       ],
	I1006 02:34:07.262270 2333562 command_runner.go:130] >       "repoDigests": [
	I1006 02:34:07.262285 2333562 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:3c85f8a91743f4c306163137b121c64816c5c15bf2f002c9238ce0a4ac22b5c8",
	I1006 02:34:07.262298 2333562 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4"
	I1006 02:34:07.262306 2333562 command_runner.go:130] >       ],
	I1006 02:34:07.262311 2333562 command_runner.go:130] >       "size": "117187380",
	I1006 02:34:07.262319 2333562 command_runner.go:130] >       "uid": {
	I1006 02:34:07.262324 2333562 command_runner.go:130] >         "value": "0"
	I1006 02:34:07.262331 2333562 command_runner.go:130] >       },
	I1006 02:34:07.262338 2333562 command_runner.go:130] >       "username": "",
	I1006 02:34:07.262343 2333562 command_runner.go:130] >       "spec": null,
	I1006 02:34:07.262348 2333562 command_runner.go:130] >       "pinned": false
	I1006 02:34:07.262356 2333562 command_runner.go:130] >     },
	I1006 02:34:07.262361 2333562 command_runner.go:130] >     {
	I1006 02:34:07.262371 2333562 command_runner.go:130] >       "id": "7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa",
	I1006 02:34:07.262380 2333562 command_runner.go:130] >       "repoTags": [
	I1006 02:34:07.262386 2333562 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.2"
	I1006 02:34:07.262393 2333562 command_runner.go:130] >       ],
	I1006 02:34:07.262399 2333562 command_runner.go:130] >       "repoDigests": [
	I1006 02:34:07.262411 2333562 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf",
	I1006 02:34:07.262422 2333562 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:714d43ef0334cfb0e15ffd89f0b385681374b72a4865be28ff891b6297c015b8"
	I1006 02:34:07.262431 2333562 command_runner.go:130] >       ],
	I1006 02:34:07.262436 2333562 command_runner.go:130] >       "size": "69926807",
	I1006 02:34:07.262444 2333562 command_runner.go:130] >       "uid": null,
	I1006 02:34:07.262450 2333562 command_runner.go:130] >       "username": "",
	I1006 02:34:07.262458 2333562 command_runner.go:130] >       "spec": null,
	I1006 02:34:07.262464 2333562 command_runner.go:130] >       "pinned": false
	I1006 02:34:07.262471 2333562 command_runner.go:130] >     },
	I1006 02:34:07.262476 2333562 command_runner.go:130] >     {
	I1006 02:34:07.262487 2333562 command_runner.go:130] >       "id": "64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7",
	I1006 02:34:07.262492 2333562 command_runner.go:130] >       "repoTags": [
	I1006 02:34:07.262500 2333562 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.2"
	I1006 02:34:07.262505 2333562 command_runner.go:130] >       ],
	I1006 02:34:07.262512 2333562 command_runner.go:130] >       "repoDigests": [
	I1006 02:34:07.262556 2333562 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab",
	I1006 02:34:07.262571 2333562 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:f673cc4710d8ed6e3bd224b5641d2537d08e19177a291c2d9e189ea16f081c88"
	I1006 02:34:07.262578 2333562 command_runner.go:130] >       ],
	I1006 02:34:07.262583 2333562 command_runner.go:130] >       "size": "59188020",
	I1006 02:34:07.262591 2333562 command_runner.go:130] >       "uid": {
	I1006 02:34:07.262600 2333562 command_runner.go:130] >         "value": "0"
	I1006 02:34:07.262605 2333562 command_runner.go:130] >       },
	I1006 02:34:07.262613 2333562 command_runner.go:130] >       "username": "",
	I1006 02:34:07.262618 2333562 command_runner.go:130] >       "spec": null,
	I1006 02:34:07.262626 2333562 command_runner.go:130] >       "pinned": false
	I1006 02:34:07.262631 2333562 command_runner.go:130] >     },
	I1006 02:34:07.262638 2333562 command_runner.go:130] >     {
	I1006 02:34:07.262646 2333562 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I1006 02:34:07.262655 2333562 command_runner.go:130] >       "repoTags": [
	I1006 02:34:07.262661 2333562 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1006 02:34:07.262665 2333562 command_runner.go:130] >       ],
	I1006 02:34:07.262671 2333562 command_runner.go:130] >       "repoDigests": [
	I1006 02:34:07.262684 2333562 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I1006 02:34:07.262695 2333562 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I1006 02:34:07.262703 2333562 command_runner.go:130] >       ],
	I1006 02:34:07.262709 2333562 command_runner.go:130] >       "size": "520014",
	I1006 02:34:07.262717 2333562 command_runner.go:130] >       "uid": {
	I1006 02:34:07.262724 2333562 command_runner.go:130] >         "value": "65535"
	I1006 02:34:07.262731 2333562 command_runner.go:130] >       },
	I1006 02:34:07.262736 2333562 command_runner.go:130] >       "username": "",
	I1006 02:34:07.262741 2333562 command_runner.go:130] >       "spec": null,
	I1006 02:34:07.262746 2333562 command_runner.go:130] >       "pinned": false
	I1006 02:34:07.262752 2333562 command_runner.go:130] >     }
	I1006 02:34:07.262757 2333562 command_runner.go:130] >   ]
	I1006 02:34:07.262764 2333562 command_runner.go:130] > }
	I1006 02:34:07.265103 2333562 crio.go:496] all images are preloaded for cri-o runtime.
	I1006 02:34:07.265124 2333562 crio.go:415] Images already preloaded, skipping extraction
	I1006 02:34:07.265182 2333562 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 02:34:07.301817 2333562 command_runner.go:130] > {
	I1006 02:34:07.301837 2333562 command_runner.go:130] >   "images": [
	I1006 02:34:07.301842 2333562 command_runner.go:130] >     {
	I1006 02:34:07.301863 2333562 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I1006 02:34:07.301870 2333562 command_runner.go:130] >       "repoTags": [
	I1006 02:34:07.301882 2333562 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1006 02:34:07.301887 2333562 command_runner.go:130] >       ],
	I1006 02:34:07.301893 2333562 command_runner.go:130] >       "repoDigests": [
	I1006 02:34:07.301904 2333562 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1006 02:34:07.301916 2333562 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I1006 02:34:07.301938 2333562 command_runner.go:130] >       ],
	I1006 02:34:07.301949 2333562 command_runner.go:130] >       "size": "60867618",
	I1006 02:34:07.301955 2333562 command_runner.go:130] >       "uid": null,
	I1006 02:34:07.301965 2333562 command_runner.go:130] >       "username": "",
	I1006 02:34:07.301973 2333562 command_runner.go:130] >       "spec": null,
	I1006 02:34:07.301981 2333562 command_runner.go:130] >       "pinned": false
	I1006 02:34:07.301986 2333562 command_runner.go:130] >     },
	I1006 02:34:07.301990 2333562 command_runner.go:130] >     {
	I1006 02:34:07.302008 2333562 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1006 02:34:07.302017 2333562 command_runner.go:130] >       "repoTags": [
	I1006 02:34:07.302024 2333562 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1006 02:34:07.302028 2333562 command_runner.go:130] >       ],
	I1006 02:34:07.302033 2333562 command_runner.go:130] >       "repoDigests": [
	I1006 02:34:07.302043 2333562 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1006 02:34:07.302053 2333562 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1006 02:34:07.302060 2333562 command_runner.go:130] >       ],
	I1006 02:34:07.302068 2333562 command_runner.go:130] >       "size": "29037500",
	I1006 02:34:07.302073 2333562 command_runner.go:130] >       "uid": null,
	I1006 02:34:07.302093 2333562 command_runner.go:130] >       "username": "",
	I1006 02:34:07.302099 2333562 command_runner.go:130] >       "spec": null,
	I1006 02:34:07.302107 2333562 command_runner.go:130] >       "pinned": false
	I1006 02:34:07.302111 2333562 command_runner.go:130] >     },
	I1006 02:34:07.302119 2333562 command_runner.go:130] >     {
	I1006 02:34:07.302129 2333562 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I1006 02:34:07.302137 2333562 command_runner.go:130] >       "repoTags": [
	I1006 02:34:07.302144 2333562 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1006 02:34:07.302148 2333562 command_runner.go:130] >       ],
	I1006 02:34:07.302154 2333562 command_runner.go:130] >       "repoDigests": [
	I1006 02:34:07.302172 2333562 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I1006 02:34:07.302187 2333562 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I1006 02:34:07.302192 2333562 command_runner.go:130] >       ],
	I1006 02:34:07.302197 2333562 command_runner.go:130] >       "size": "51393451",
	I1006 02:34:07.302207 2333562 command_runner.go:130] >       "uid": null,
	I1006 02:34:07.302212 2333562 command_runner.go:130] >       "username": "",
	I1006 02:34:07.302217 2333562 command_runner.go:130] >       "spec": null,
	I1006 02:34:07.302227 2333562 command_runner.go:130] >       "pinned": false
	I1006 02:34:07.302240 2333562 command_runner.go:130] >     },
	I1006 02:34:07.302250 2333562 command_runner.go:130] >     {
	I1006 02:34:07.302258 2333562 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I1006 02:34:07.302263 2333562 command_runner.go:130] >       "repoTags": [
	I1006 02:34:07.302272 2333562 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1006 02:34:07.302276 2333562 command_runner.go:130] >       ],
	I1006 02:34:07.302281 2333562 command_runner.go:130] >       "repoDigests": [
	I1006 02:34:07.302290 2333562 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I1006 02:34:07.302301 2333562 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I1006 02:34:07.302320 2333562 command_runner.go:130] >       ],
	I1006 02:34:07.302330 2333562 command_runner.go:130] >       "size": "182203183",
	I1006 02:34:07.302335 2333562 command_runner.go:130] >       "uid": {
	I1006 02:34:07.302344 2333562 command_runner.go:130] >         "value": "0"
	I1006 02:34:07.302349 2333562 command_runner.go:130] >       },
	I1006 02:34:07.302354 2333562 command_runner.go:130] >       "username": "",
	I1006 02:34:07.302361 2333562 command_runner.go:130] >       "spec": null,
	I1006 02:34:07.302366 2333562 command_runner.go:130] >       "pinned": false
	I1006 02:34:07.302370 2333562 command_runner.go:130] >     },
	I1006 02:34:07.302376 2333562 command_runner.go:130] >     {
	I1006 02:34:07.302392 2333562 command_runner.go:130] >       "id": "30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c",
	I1006 02:34:07.302400 2333562 command_runner.go:130] >       "repoTags": [
	I1006 02:34:07.302407 2333562 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.2"
	I1006 02:34:07.302417 2333562 command_runner.go:130] >       ],
	I1006 02:34:07.302422 2333562 command_runner.go:130] >       "repoDigests": [
	I1006 02:34:07.302431 2333562 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:07742a71be5e2ac5dc434618fa720ba38bebb463e3bdc0c58b600b4f7716bc3d",
	I1006 02:34:07.302447 2333562 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c"
	I1006 02:34:07.302452 2333562 command_runner.go:130] >       ],
	I1006 02:34:07.302463 2333562 command_runner.go:130] >       "size": "121054158",
	I1006 02:34:07.302472 2333562 command_runner.go:130] >       "uid": {
	I1006 02:34:07.302477 2333562 command_runner.go:130] >         "value": "0"
	I1006 02:34:07.302482 2333562 command_runner.go:130] >       },
	I1006 02:34:07.302492 2333562 command_runner.go:130] >       "username": "",
	I1006 02:34:07.302497 2333562 command_runner.go:130] >       "spec": null,
	I1006 02:34:07.302502 2333562 command_runner.go:130] >       "pinned": false
	I1006 02:34:07.302508 2333562 command_runner.go:130] >     },
	I1006 02:34:07.302513 2333562 command_runner.go:130] >     {
	I1006 02:34:07.302523 2333562 command_runner.go:130] >       "id": "89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c",
	I1006 02:34:07.302532 2333562 command_runner.go:130] >       "repoTags": [
	I1006 02:34:07.302547 2333562 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.2"
	I1006 02:34:07.302554 2333562 command_runner.go:130] >       ],
	I1006 02:34:07.302559 2333562 command_runner.go:130] >       "repoDigests": [
	I1006 02:34:07.302568 2333562 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:3c85f8a91743f4c306163137b121c64816c5c15bf2f002c9238ce0a4ac22b5c8",
	I1006 02:34:07.302584 2333562 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4"
	I1006 02:34:07.302588 2333562 command_runner.go:130] >       ],
	I1006 02:34:07.302600 2333562 command_runner.go:130] >       "size": "117187380",
	I1006 02:34:07.302605 2333562 command_runner.go:130] >       "uid": {
	I1006 02:34:07.302619 2333562 command_runner.go:130] >         "value": "0"
	I1006 02:34:07.302624 2333562 command_runner.go:130] >       },
	I1006 02:34:07.302631 2333562 command_runner.go:130] >       "username": "",
	I1006 02:34:07.302636 2333562 command_runner.go:130] >       "spec": null,
	I1006 02:34:07.302641 2333562 command_runner.go:130] >       "pinned": false
	I1006 02:34:07.302646 2333562 command_runner.go:130] >     },
	I1006 02:34:07.302656 2333562 command_runner.go:130] >     {
	I1006 02:34:07.302663 2333562 command_runner.go:130] >       "id": "7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa",
	I1006 02:34:07.302674 2333562 command_runner.go:130] >       "repoTags": [
	I1006 02:34:07.302683 2333562 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.2"
	I1006 02:34:07.302693 2333562 command_runner.go:130] >       ],
	I1006 02:34:07.302701 2333562 command_runner.go:130] >       "repoDigests": [
	I1006 02:34:07.302711 2333562 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf",
	I1006 02:34:07.302723 2333562 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:714d43ef0334cfb0e15ffd89f0b385681374b72a4865be28ff891b6297c015b8"
	I1006 02:34:07.302728 2333562 command_runner.go:130] >       ],
	I1006 02:34:07.302735 2333562 command_runner.go:130] >       "size": "69926807",
	I1006 02:34:07.302740 2333562 command_runner.go:130] >       "uid": null,
	I1006 02:34:07.302745 2333562 command_runner.go:130] >       "username": "",
	I1006 02:34:07.302755 2333562 command_runner.go:130] >       "spec": null,
	I1006 02:34:07.302768 2333562 command_runner.go:130] >       "pinned": false
	I1006 02:34:07.302776 2333562 command_runner.go:130] >     },
	I1006 02:34:07.302780 2333562 command_runner.go:130] >     {
	I1006 02:34:07.302790 2333562 command_runner.go:130] >       "id": "64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7",
	I1006 02:34:07.302808 2333562 command_runner.go:130] >       "repoTags": [
	I1006 02:34:07.302817 2333562 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.2"
	I1006 02:34:07.302822 2333562 command_runner.go:130] >       ],
	I1006 02:34:07.302829 2333562 command_runner.go:130] >       "repoDigests": [
	I1006 02:34:07.302890 2333562 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab",
	I1006 02:34:07.302904 2333562 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:f673cc4710d8ed6e3bd224b5641d2537d08e19177a291c2d9e189ea16f081c88"
	I1006 02:34:07.302917 2333562 command_runner.go:130] >       ],
	I1006 02:34:07.302924 2333562 command_runner.go:130] >       "size": "59188020",
	I1006 02:34:07.302929 2333562 command_runner.go:130] >       "uid": {
	I1006 02:34:07.302934 2333562 command_runner.go:130] >         "value": "0"
	I1006 02:34:07.302938 2333562 command_runner.go:130] >       },
	I1006 02:34:07.302943 2333562 command_runner.go:130] >       "username": "",
	I1006 02:34:07.302948 2333562 command_runner.go:130] >       "spec": null,
	I1006 02:34:07.302953 2333562 command_runner.go:130] >       "pinned": false
	I1006 02:34:07.302957 2333562 command_runner.go:130] >     },
	I1006 02:34:07.302961 2333562 command_runner.go:130] >     {
	I1006 02:34:07.302969 2333562 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I1006 02:34:07.302975 2333562 command_runner.go:130] >       "repoTags": [
	I1006 02:34:07.302983 2333562 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1006 02:34:07.302997 2333562 command_runner.go:130] >       ],
	I1006 02:34:07.303003 2333562 command_runner.go:130] >       "repoDigests": [
	I1006 02:34:07.303015 2333562 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I1006 02:34:07.303028 2333562 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I1006 02:34:07.303033 2333562 command_runner.go:130] >       ],
	I1006 02:34:07.303038 2333562 command_runner.go:130] >       "size": "520014",
	I1006 02:34:07.303061 2333562 command_runner.go:130] >       "uid": {
	I1006 02:34:07.303069 2333562 command_runner.go:130] >         "value": "65535"
	I1006 02:34:07.303074 2333562 command_runner.go:130] >       },
	I1006 02:34:07.303081 2333562 command_runner.go:130] >       "username": "",
	I1006 02:34:07.303089 2333562 command_runner.go:130] >       "spec": null,
	I1006 02:34:07.303095 2333562 command_runner.go:130] >       "pinned": false
	I1006 02:34:07.303103 2333562 command_runner.go:130] >     }
	I1006 02:34:07.303108 2333562 command_runner.go:130] >   ]
	I1006 02:34:07.303112 2333562 command_runner.go:130] > }
	I1006 02:34:07.305755 2333562 crio.go:496] all images are preloaded for cri-o runtime.
	I1006 02:34:07.305774 2333562 cache_images.go:84] Images are preloaded, skipping loading
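The JSON payload above is the raw output of `sudo crictl images --output json` that minikube inspects before concluding the images are preloaded. A minimal Go sketch of decoding that shape follows; the type names imageList/imageInfo are hypothetical names for this illustration (not minikube's own code), and the uid/username/spec fields are omitted for brevity.

// Illustrative sketch: decode the crictl image listing shown in this log.
// Field names mirror the JSON above; note that "size" is reported as a
// string, not a number.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageInfo struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"`
	Pinned      bool     `json:"pinned"`
}

type imageList struct {
	Images []imageInfo `json:"images"`
}

func main() {
	// Assumes crictl is installed and pointed at the CRI-O socket.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		fmt.Println(img.RepoTags, img.Size)
	}
}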
	I1006 02:34:07.305859 2333562 ssh_runner.go:195] Run: crio config
	I1006 02:34:07.354169 2333562 command_runner.go:130] ! time="2023-10-06 02:34:07.353811335Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1006 02:34:07.354467 2333562 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1006 02:34:07.361448 2333562 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1006 02:34:07.361468 2333562 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1006 02:34:07.361477 2333562 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1006 02:34:07.361484 2333562 command_runner.go:130] > #
	I1006 02:34:07.361495 2333562 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1006 02:34:07.361503 2333562 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1006 02:34:07.361511 2333562 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1006 02:34:07.361519 2333562 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1006 02:34:07.361524 2333562 command_runner.go:130] > # reload'.
	I1006 02:34:07.361532 2333562 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1006 02:34:07.361539 2333562 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1006 02:34:07.361547 2333562 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1006 02:34:07.361554 2333562 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1006 02:34:07.361559 2333562 command_runner.go:130] > [crio]
	I1006 02:34:07.361567 2333562 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1006 02:34:07.361573 2333562 command_runner.go:130] > # containers images, in this directory.
	I1006 02:34:07.361584 2333562 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1006 02:34:07.361594 2333562 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1006 02:34:07.361600 2333562 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1006 02:34:07.361608 2333562 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1006 02:34:07.361615 2333562 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1006 02:34:07.361622 2333562 command_runner.go:130] > # storage_driver = "vfs"
	I1006 02:34:07.361629 2333562 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1006 02:34:07.361636 2333562 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1006 02:34:07.361641 2333562 command_runner.go:130] > # storage_option = [
	I1006 02:34:07.361645 2333562 command_runner.go:130] > # ]
	I1006 02:34:07.361653 2333562 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1006 02:34:07.361660 2333562 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1006 02:34:07.361666 2333562 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1006 02:34:07.361672 2333562 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1006 02:34:07.361680 2333562 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1006 02:34:07.361685 2333562 command_runner.go:130] > # always happen on a node reboot
	I1006 02:34:07.361691 2333562 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1006 02:34:07.361698 2333562 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1006 02:34:07.361705 2333562 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1006 02:34:07.361717 2333562 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1006 02:34:07.361726 2333562 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1006 02:34:07.361737 2333562 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1006 02:34:07.361746 2333562 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1006 02:34:07.361751 2333562 command_runner.go:130] > # internal_wipe = true
	I1006 02:34:07.361759 2333562 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1006 02:34:07.361767 2333562 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1006 02:34:07.361773 2333562 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1006 02:34:07.361780 2333562 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1006 02:34:07.361787 2333562 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1006 02:34:07.361791 2333562 command_runner.go:130] > [crio.api]
	I1006 02:34:07.361798 2333562 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1006 02:34:07.361803 2333562 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1006 02:34:07.361810 2333562 command_runner.go:130] > # IP address on which the stream server will listen.
	I1006 02:34:07.361815 2333562 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1006 02:34:07.361823 2333562 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1006 02:34:07.361829 2333562 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1006 02:34:07.361834 2333562 command_runner.go:130] > # stream_port = "0"
	I1006 02:34:07.361841 2333562 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1006 02:34:07.361847 2333562 command_runner.go:130] > # stream_enable_tls = false
	I1006 02:34:07.361854 2333562 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1006 02:34:07.361859 2333562 command_runner.go:130] > # stream_idle_timeout = ""
	I1006 02:34:07.361867 2333562 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1006 02:34:07.361876 2333562 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1006 02:34:07.361880 2333562 command_runner.go:130] > # minutes.
	I1006 02:34:07.361885 2333562 command_runner.go:130] > # stream_tls_cert = ""
	I1006 02:34:07.361892 2333562 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1006 02:34:07.361900 2333562 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1006 02:34:07.361905 2333562 command_runner.go:130] > # stream_tls_key = ""
	I1006 02:34:07.361911 2333562 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1006 02:34:07.361922 2333562 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1006 02:34:07.361928 2333562 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1006 02:34:07.361933 2333562 command_runner.go:130] > # stream_tls_ca = ""
	I1006 02:34:07.361942 2333562 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1006 02:34:07.361947 2333562 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1006 02:34:07.361956 2333562 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1006 02:34:07.361963 2333562 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
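The [crio.api] settings above define the AF_LOCAL socket and gRPC limits that crictl and the kubelet use to reach CRI-O. As a hedged sketch (assuming the v1 CRI API from k8s.io/cri-api and google.golang.org/grpc, neither of which this test exercises directly), the same image listing could be requested straight over that socket:

// Hedged sketch: query CRI-O's image service over the default socket from
// the [crio.api] section. This mirrors what `crictl images` does; it is
// not part of the minikube test suite.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// "listen" defaults to /var/run/crio/crio.sock per the config above.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	client := runtimeapi.NewImageServiceClient(conn)
	resp, err := client.ListImages(ctx, &runtimeapi.ListImagesRequest{})
	if err != nil {
		panic(err)
	}
	for _, img := range resp.Images {
		fmt.Println(img.Id, img.RepoTags)
	}
}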
	I1006 02:34:07.361997 2333562 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1006 02:34:07.362006 2333562 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1006 02:34:07.362010 2333562 command_runner.go:130] > [crio.runtime]
	I1006 02:34:07.362017 2333562 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1006 02:34:07.362023 2333562 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1006 02:34:07.362028 2333562 command_runner.go:130] > # "nofile=1024:2048"
	I1006 02:34:07.362037 2333562 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1006 02:34:07.362042 2333562 command_runner.go:130] > # default_ulimits = [
	I1006 02:34:07.362046 2333562 command_runner.go:130] > # ]
	I1006 02:34:07.362053 2333562 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1006 02:34:07.362058 2333562 command_runner.go:130] > # no_pivot = false
	I1006 02:34:07.362064 2333562 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1006 02:34:07.362072 2333562 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1006 02:34:07.362078 2333562 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1006 02:34:07.362085 2333562 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1006 02:34:07.362090 2333562 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1006 02:34:07.362098 2333562 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1006 02:34:07.362104 2333562 command_runner.go:130] > # conmon = ""
	I1006 02:34:07.362109 2333562 command_runner.go:130] > # Cgroup setting for conmon
	I1006 02:34:07.362118 2333562 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1006 02:34:07.362122 2333562 command_runner.go:130] > conmon_cgroup = "pod"
	I1006 02:34:07.362130 2333562 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1006 02:34:07.362136 2333562 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1006 02:34:07.362144 2333562 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1006 02:34:07.362149 2333562 command_runner.go:130] > # conmon_env = [
	I1006 02:34:07.362152 2333562 command_runner.go:130] > # ]
	I1006 02:34:07.362159 2333562 command_runner.go:130] > # Additional environment variables to set for all the
	I1006 02:34:07.362165 2333562 command_runner.go:130] > # containers. These are overridden if set in the
	I1006 02:34:07.362172 2333562 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1006 02:34:07.362176 2333562 command_runner.go:130] > # default_env = [
	I1006 02:34:07.362180 2333562 command_runner.go:130] > # ]
	I1006 02:34:07.362187 2333562 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1006 02:34:07.362191 2333562 command_runner.go:130] > # selinux = false
	I1006 02:34:07.362199 2333562 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1006 02:34:07.362206 2333562 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1006 02:34:07.362216 2333562 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1006 02:34:07.362222 2333562 command_runner.go:130] > # seccomp_profile = ""
	I1006 02:34:07.362228 2333562 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1006 02:34:07.362235 2333562 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1006 02:34:07.362243 2333562 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1006 02:34:07.362249 2333562 command_runner.go:130] > # which might increase security.
	I1006 02:34:07.362254 2333562 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1006 02:34:07.362262 2333562 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1006 02:34:07.362269 2333562 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1006 02:34:07.362277 2333562 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1006 02:34:07.362285 2333562 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1006 02:34:07.362292 2333562 command_runner.go:130] > # This option supports live configuration reload.
	I1006 02:34:07.362298 2333562 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1006 02:34:07.362305 2333562 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1006 02:34:07.362310 2333562 command_runner.go:130] > # the cgroup blockio controller.
	I1006 02:34:07.362315 2333562 command_runner.go:130] > # blockio_config_file = ""
	I1006 02:34:07.362323 2333562 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1006 02:34:07.362328 2333562 command_runner.go:130] > # irqbalance daemon.
	I1006 02:34:07.362335 2333562 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1006 02:34:07.362343 2333562 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1006 02:34:07.362349 2333562 command_runner.go:130] > # This option supports live configuration reload.
	I1006 02:34:07.362354 2333562 command_runner.go:130] > # rdt_config_file = ""
	I1006 02:34:07.362360 2333562 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1006 02:34:07.362365 2333562 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1006 02:34:07.362373 2333562 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1006 02:34:07.362378 2333562 command_runner.go:130] > # separate_pull_cgroup = ""
	I1006 02:34:07.362386 2333562 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1006 02:34:07.362393 2333562 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1006 02:34:07.362397 2333562 command_runner.go:130] > # will be added.
	I1006 02:34:07.362403 2333562 command_runner.go:130] > # default_capabilities = [
	I1006 02:34:07.362407 2333562 command_runner.go:130] > # 	"CHOWN",
	I1006 02:34:07.362412 2333562 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1006 02:34:07.362416 2333562 command_runner.go:130] > # 	"FSETID",
	I1006 02:34:07.362421 2333562 command_runner.go:130] > # 	"FOWNER",
	I1006 02:34:07.362425 2333562 command_runner.go:130] > # 	"SETGID",
	I1006 02:34:07.362430 2333562 command_runner.go:130] > # 	"SETUID",
	I1006 02:34:07.362436 2333562 command_runner.go:130] > # 	"SETPCAP",
	I1006 02:34:07.362441 2333562 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1006 02:34:07.362446 2333562 command_runner.go:130] > # 	"KILL",
	I1006 02:34:07.362450 2333562 command_runner.go:130] > # ]
	I1006 02:34:07.362459 2333562 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1006 02:34:07.362467 2333562 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1006 02:34:07.362476 2333562 command_runner.go:130] > # add_inheritable_capabilities = true
	I1006 02:34:07.362483 2333562 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1006 02:34:07.362490 2333562 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1006 02:34:07.362496 2333562 command_runner.go:130] > # default_sysctls = [
	I1006 02:34:07.362499 2333562 command_runner.go:130] > # ]
	I1006 02:34:07.362505 2333562 command_runner.go:130] > # List of devices on the host that a
	I1006 02:34:07.362512 2333562 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1006 02:34:07.362517 2333562 command_runner.go:130] > # allowed_devices = [
	I1006 02:34:07.362522 2333562 command_runner.go:130] > # 	"/dev/fuse",
	I1006 02:34:07.362526 2333562 command_runner.go:130] > # ]
	I1006 02:34:07.362533 2333562 command_runner.go:130] > # List of additional devices. specified as
	I1006 02:34:07.362568 2333562 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1006 02:34:07.362577 2333562 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1006 02:34:07.362584 2333562 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1006 02:34:07.362589 2333562 command_runner.go:130] > # additional_devices = [
	I1006 02:34:07.362593 2333562 command_runner.go:130] > # ]
	I1006 02:34:07.362599 2333562 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1006 02:34:07.362604 2333562 command_runner.go:130] > # cdi_spec_dirs = [
	I1006 02:34:07.362608 2333562 command_runner.go:130] > # 	"/etc/cdi",
	I1006 02:34:07.362613 2333562 command_runner.go:130] > # 	"/var/run/cdi",
	I1006 02:34:07.362616 2333562 command_runner.go:130] > # ]
	I1006 02:34:07.362624 2333562 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1006 02:34:07.362631 2333562 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1006 02:34:07.362636 2333562 command_runner.go:130] > # Defaults to false.
	I1006 02:34:07.362642 2333562 command_runner.go:130] > # device_ownership_from_security_context = false
	I1006 02:34:07.362649 2333562 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1006 02:34:07.362656 2333562 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1006 02:34:07.362661 2333562 command_runner.go:130] > # hooks_dir = [
	I1006 02:34:07.362666 2333562 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1006 02:34:07.362671 2333562 command_runner.go:130] > # ]
	I1006 02:34:07.362680 2333562 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1006 02:34:07.362688 2333562 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1006 02:34:07.362694 2333562 command_runner.go:130] > # its default mounts from the following two files:
	I1006 02:34:07.362697 2333562 command_runner.go:130] > #
	I1006 02:34:07.362705 2333562 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1006 02:34:07.362712 2333562 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1006 02:34:07.362719 2333562 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1006 02:34:07.362723 2333562 command_runner.go:130] > #
	I1006 02:34:07.362730 2333562 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1006 02:34:07.362737 2333562 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1006 02:34:07.362745 2333562 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1006 02:34:07.362751 2333562 command_runner.go:130] > #      only add mounts it finds in this file.
	I1006 02:34:07.362754 2333562 command_runner.go:130] > #
	I1006 02:34:07.362761 2333562 command_runner.go:130] > # default_mounts_file = ""
	I1006 02:34:07.362768 2333562 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1006 02:34:07.362775 2333562 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1006 02:34:07.362780 2333562 command_runner.go:130] > # pids_limit = 0
	I1006 02:34:07.362787 2333562 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1006 02:34:07.362805 2333562 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1006 02:34:07.362813 2333562 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1006 02:34:07.362824 2333562 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1006 02:34:07.362829 2333562 command_runner.go:130] > # log_size_max = -1
	I1006 02:34:07.362837 2333562 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1006 02:34:07.362842 2333562 command_runner.go:130] > # log_to_journald = false
	I1006 02:34:07.362850 2333562 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1006 02:34:07.362855 2333562 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1006 02:34:07.362861 2333562 command_runner.go:130] > # Path to directory for container attach sockets.
	I1006 02:34:07.362867 2333562 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1006 02:34:07.362873 2333562 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1006 02:34:07.362879 2333562 command_runner.go:130] > # bind_mount_prefix = ""
	I1006 02:34:07.362885 2333562 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1006 02:34:07.362890 2333562 command_runner.go:130] > # read_only = false
	I1006 02:34:07.362897 2333562 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1006 02:34:07.362905 2333562 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1006 02:34:07.362910 2333562 command_runner.go:130] > # live configuration reload.
	I1006 02:34:07.362914 2333562 command_runner.go:130] > # log_level = "info"
	I1006 02:34:07.362923 2333562 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1006 02:34:07.362929 2333562 command_runner.go:130] > # This option supports live configuration reload.
	I1006 02:34:07.362933 2333562 command_runner.go:130] > # log_filter = ""
	I1006 02:34:07.362941 2333562 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1006 02:34:07.362948 2333562 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1006 02:34:07.362952 2333562 command_runner.go:130] > # separated by comma.
	I1006 02:34:07.362957 2333562 command_runner.go:130] > # uid_mappings = ""
	I1006 02:34:07.362964 2333562 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1006 02:34:07.362971 2333562 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1006 02:34:07.362976 2333562 command_runner.go:130] > # separated by comma.
	I1006 02:34:07.362981 2333562 command_runner.go:130] > # gid_mappings = ""
	I1006 02:34:07.362990 2333562 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1006 02:34:07.362997 2333562 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1006 02:34:07.363004 2333562 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1006 02:34:07.363009 2333562 command_runner.go:130] > # minimum_mappable_uid = -1
	I1006 02:34:07.363017 2333562 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1006 02:34:07.363024 2333562 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1006 02:34:07.363031 2333562 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1006 02:34:07.363037 2333562 command_runner.go:130] > # minimum_mappable_gid = -1
	I1006 02:34:07.363137 2333562 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1006 02:34:07.363164 2333562 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1006 02:34:07.363190 2333562 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1006 02:34:07.363220 2333562 command_runner.go:130] > # ctr_stop_timeout = 30
	I1006 02:34:07.363248 2333562 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1006 02:34:07.363295 2333562 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1006 02:34:07.363329 2333562 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1006 02:34:07.363354 2333562 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1006 02:34:07.363374 2333562 command_runner.go:130] > # drop_infra_ctr = true
	I1006 02:34:07.363398 2333562 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1006 02:34:07.363480 2333562 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1006 02:34:07.363515 2333562 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1006 02:34:07.363541 2333562 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1006 02:34:07.363562 2333562 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1006 02:34:07.363592 2333562 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1006 02:34:07.363619 2333562 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1006 02:34:07.363643 2333562 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1006 02:34:07.363668 2333562 command_runner.go:130] > # pinns_path = ""
	I1006 02:34:07.363701 2333562 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1006 02:34:07.363730 2333562 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1006 02:34:07.363754 2333562 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1006 02:34:07.363774 2333562 command_runner.go:130] > # default_runtime = "runc"
	I1006 02:34:07.363806 2333562 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1006 02:34:07.363834 2333562 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior, where the path is created as a directory).
	I1006 02:34:07.363862 2333562 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1006 02:34:07.363884 2333562 command_runner.go:130] > # creation as a file is not desired either.
	I1006 02:34:07.363918 2333562 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1006 02:34:07.363944 2333562 command_runner.go:130] > # the hostname is being managed dynamically.
	I1006 02:34:07.363965 2333562 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1006 02:34:07.363984 2333562 command_runner.go:130] > # ]
	I1006 02:34:07.364006 2333562 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1006 02:34:07.364041 2333562 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1006 02:34:07.364063 2333562 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1006 02:34:07.364086 2333562 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1006 02:34:07.364113 2333562 command_runner.go:130] > #
	I1006 02:34:07.364141 2333562 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1006 02:34:07.364161 2333562 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1006 02:34:07.364179 2333562 command_runner.go:130] > #  runtime_type = "oci"
	I1006 02:34:07.364214 2333562 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1006 02:34:07.364237 2333562 command_runner.go:130] > #  privileged_without_host_devices = false
	I1006 02:34:07.364255 2333562 command_runner.go:130] > #  allowed_annotations = []
	I1006 02:34:07.364275 2333562 command_runner.go:130] > # Where:
	I1006 02:34:07.364297 2333562 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1006 02:34:07.364328 2333562 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1006 02:34:07.364354 2333562 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1006 02:34:07.364378 2333562 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1006 02:34:07.364397 2333562 command_runner.go:130] > #   in $PATH.
	I1006 02:34:07.364430 2333562 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1006 02:34:07.364452 2333562 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1006 02:34:07.364474 2333562 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1006 02:34:07.364495 2333562 command_runner.go:130] > #   state.
	I1006 02:34:07.364530 2333562 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1006 02:34:07.364559 2333562 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1006 02:34:07.364586 2333562 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1006 02:34:07.364606 2333562 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1006 02:34:07.364640 2333562 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1006 02:34:07.364666 2333562 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1006 02:34:07.364689 2333562 command_runner.go:130] > #   The currently recognized values are:
	I1006 02:34:07.364727 2333562 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1006 02:34:07.364751 2333562 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1006 02:34:07.364775 2333562 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1006 02:34:07.364807 2333562 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1006 02:34:07.364833 2333562 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1006 02:34:07.364857 2333562 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1006 02:34:07.364894 2333562 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1006 02:34:07.364918 2333562 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1006 02:34:07.364939 2333562 command_runner.go:130] > #   should be moved to the container's cgroup
	I1006 02:34:07.364969 2333562 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1006 02:34:07.364991 2333562 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1006 02:34:07.365012 2333562 command_runner.go:130] > runtime_type = "oci"
	I1006 02:34:07.365043 2333562 command_runner.go:130] > runtime_root = "/run/runc"
	I1006 02:34:07.365078 2333562 command_runner.go:130] > runtime_config_path = ""
	I1006 02:34:07.365099 2333562 command_runner.go:130] > monitor_path = ""
	I1006 02:34:07.365118 2333562 command_runner.go:130] > monitor_cgroup = ""
	I1006 02:34:07.365148 2333562 command_runner.go:130] > monitor_exec_cgroup = ""
	I1006 02:34:07.365326 2333562 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1006 02:34:07.365362 2333562 command_runner.go:130] > # running containers
	I1006 02:34:07.365382 2333562 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1006 02:34:07.365405 2333562 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1006 02:34:07.365438 2333562 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1006 02:34:07.365536 2333562 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1006 02:34:07.365556 2333562 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1006 02:34:07.365578 2333562 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1006 02:34:07.365611 2333562 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1006 02:34:07.365632 2333562 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1006 02:34:07.365661 2333562 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1006 02:34:07.365689 2333562 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
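Only runc is declared as a concrete handler above; the commented-out crun and kata entries show where alternatives would go. A hedged sketch of registering one via a drop-in file follows. The drop-in directory /etc/crio/crio.conf.d and the /usr/bin/crun path are assumptions about the host, not values from this log, and since the runtimes table is not marked "supports live configuration reload" above, a CRI-O restart (not SIGHUP) is the safe assumption afterwards.

// Hedged sketch: register a crun handler as a CRI-O drop-in file.
// Paths are assumptions about the host; restart CRI-O after writing.
package main

import "os"

const crunHandler = `[crio.runtime.runtimes.crun]
runtime_path = "/usr/bin/crun"
runtime_type = "oci"
runtime_root = "/run/crun"
`

func main() {
	if err := os.WriteFile("/etc/crio/crio.conf.d/10-crun.conf", []byte(crunHandler), 0o644); err != nil {
		panic(err)
	}
}

Pods would then opt in through a Kubernetes RuntimeClass whose handler field is "crun".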
	I1006 02:34:07.365713 2333562 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1006 02:34:07.365734 2333562 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1006 02:34:07.365769 2333562 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1006 02:34:07.365794 2333562 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1006 02:34:07.365821 2333562 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1006 02:34:07.366174 2333562 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1006 02:34:07.366280 2333562 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1006 02:34:07.366304 2333562 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1006 02:34:07.366327 2333562 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1006 02:34:07.366367 2333562 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1006 02:34:07.366386 2333562 command_runner.go:130] > # Example:
	I1006 02:34:07.366410 2333562 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1006 02:34:07.366442 2333562 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1006 02:34:07.366463 2333562 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1006 02:34:07.366485 2333562 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1006 02:34:07.366513 2333562 command_runner.go:130] > # cpuset = 0
	I1006 02:34:07.366540 2333562 command_runner.go:130] > # cpushares = "0-1"
	I1006 02:34:07.366559 2333562 command_runner.go:130] > # Where:
	I1006 02:34:07.366580 2333562 command_runner.go:130] > # The workload name is workload-type.
	I1006 02:34:07.366615 2333562 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1006 02:34:07.366649 2333562 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1006 02:34:07.366670 2333562 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1006 02:34:07.366692 2333562 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1006 02:34:07.366727 2333562 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1006 02:34:07.366746 2333562 command_runner.go:130] > # 
	I1006 02:34:07.366770 2333562 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1006 02:34:07.366805 2333562 command_runner.go:130] > #
	I1006 02:34:07.366828 2333562 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1006 02:34:07.366849 2333562 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1006 02:34:07.366886 2333562 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1006 02:34:07.366909 2333562 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1006 02:34:07.366931 2333562 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1006 02:34:07.366958 2333562 command_runner.go:130] > [crio.image]
	I1006 02:34:07.366981 2333562 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1006 02:34:07.367002 2333562 command_runner.go:130] > # default_transport = "docker://"
	I1006 02:34:07.367034 2333562 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1006 02:34:07.367072 2333562 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1006 02:34:07.367103 2333562 command_runner.go:130] > # global_auth_file = ""
	I1006 02:34:07.367137 2333562 command_runner.go:130] > # The image used to instantiate infra containers.
	I1006 02:34:07.367146 2333562 command_runner.go:130] > # This option supports live configuration reload.
	I1006 02:34:07.367156 2333562 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1006 02:34:07.367165 2333562 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1006 02:34:07.367173 2333562 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1006 02:34:07.367183 2333562 command_runner.go:130] > # This option supports live configuration reload.
	I1006 02:34:07.367189 2333562 command_runner.go:130] > # pause_image_auth_file = ""
	I1006 02:34:07.367198 2333562 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1006 02:34:07.367211 2333562 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1006 02:34:07.367220 2333562 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1006 02:34:07.367227 2333562 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1006 02:34:07.367232 2333562 command_runner.go:130] > # pause_command = "/pause"
	I1006 02:34:07.367239 2333562 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1006 02:34:07.367250 2333562 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1006 02:34:07.367258 2333562 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1006 02:34:07.367268 2333562 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1006 02:34:07.367279 2333562 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1006 02:34:07.367288 2333562 command_runner.go:130] > # signature_policy = ""
	I1006 02:34:07.367304 2333562 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1006 02:34:07.367315 2333562 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1006 02:34:07.367320 2333562 command_runner.go:130] > # changing them here.
	I1006 02:34:07.367329 2333562 command_runner.go:130] > # insecure_registries = [
	I1006 02:34:07.367337 2333562 command_runner.go:130] > # ]
	I1006 02:34:07.367345 2333562 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1006 02:34:07.367353 2333562 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1006 02:34:07.367362 2333562 command_runner.go:130] > # image_volumes = "mkdir"
	I1006 02:34:07.367369 2333562 command_runner.go:130] > # Temporary directory to use for storing big files
	I1006 02:34:07.367376 2333562 command_runner.go:130] > # big_files_temporary_dir = ""
	I1006 02:34:07.367383 2333562 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1006 02:34:07.367391 2333562 command_runner.go:130] > # CNI plugins.
	I1006 02:34:07.367396 2333562 command_runner.go:130] > [crio.network]
	I1006 02:34:07.367403 2333562 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1006 02:34:07.367410 2333562 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1006 02:34:07.367415 2333562 command_runner.go:130] > # cni_default_network = ""
	I1006 02:34:07.367424 2333562 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1006 02:34:07.367430 2333562 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1006 02:34:07.367441 2333562 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1006 02:34:07.367449 2333562 command_runner.go:130] > # plugin_dirs = [
	I1006 02:34:07.367454 2333562 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1006 02:34:07.367458 2333562 command_runner.go:130] > # ]
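	A quick way to sanity-check the CNI paths above (a sketch; the two directories are simply the defaults from this [crio.network] section, not values confirmed elsewhere in this run):
	# List the CNI network configs and plugin binaries CRI-O will consider.
	ls -l /etc/cni/net.d/
	ls -l /opt/cni/bin/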
	I1006 02:34:07.367467 2333562 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1006 02:34:07.367474 2333562 command_runner.go:130] > [crio.metrics]
	I1006 02:34:07.367480 2333562 command_runner.go:130] > # Globally enable or disable metrics support.
	I1006 02:34:07.367485 2333562 command_runner.go:130] > # enable_metrics = false
	I1006 02:34:07.367490 2333562 command_runner.go:130] > # Specify enabled metrics collectors.
	I1006 02:34:07.367496 2333562 command_runner.go:130] > # Per default all metrics are enabled.
	I1006 02:34:07.367509 2333562 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1006 02:34:07.367517 2333562 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1006 02:34:07.367527 2333562 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1006 02:34:07.367532 2333562 command_runner.go:130] > # metrics_collectors = [
	I1006 02:34:07.367539 2333562 command_runner.go:130] > # 	"operations",
	I1006 02:34:07.367545 2333562 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1006 02:34:07.367551 2333562 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1006 02:34:07.367558 2333562 command_runner.go:130] > # 	"operations_errors",
	I1006 02:34:07.367565 2333562 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1006 02:34:07.367570 2333562 command_runner.go:130] > # 	"image_pulls_by_name",
	I1006 02:34:07.367575 2333562 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1006 02:34:07.367581 2333562 command_runner.go:130] > # 	"image_pulls_failures",
	I1006 02:34:07.367588 2333562 command_runner.go:130] > # 	"image_pulls_successes",
	I1006 02:34:07.367593 2333562 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1006 02:34:07.367598 2333562 command_runner.go:130] > # 	"image_layer_reuse",
	I1006 02:34:07.367606 2333562 command_runner.go:130] > # 	"containers_oom_total",
	I1006 02:34:07.367610 2333562 command_runner.go:130] > # 	"containers_oom",
	I1006 02:34:07.367615 2333562 command_runner.go:130] > # 	"processes_defunct",
	I1006 02:34:07.367623 2333562 command_runner.go:130] > # 	"operations_total",
	I1006 02:34:07.367628 2333562 command_runner.go:130] > # 	"operations_latency_seconds",
	I1006 02:34:07.367635 2333562 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1006 02:34:07.367644 2333562 command_runner.go:130] > # 	"operations_errors_total",
	I1006 02:34:07.367649 2333562 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1006 02:34:07.367654 2333562 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1006 02:34:07.367660 2333562 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1006 02:34:07.367665 2333562 command_runner.go:130] > # 	"image_pulls_success_total",
	I1006 02:34:07.367675 2333562 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1006 02:34:07.367681 2333562 command_runner.go:130] > # 	"containers_oom_count_total",
	I1006 02:34:07.367687 2333562 command_runner.go:130] > # ]
	I1006 02:34:07.367693 2333562 command_runner.go:130] > # The port on which the metrics server will listen.
	I1006 02:34:07.367698 2333562 command_runner.go:130] > # metrics_port = 9090
	I1006 02:34:07.367707 2333562 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1006 02:34:07.367712 2333562 command_runner.go:130] > # metrics_socket = ""
	I1006 02:34:07.367721 2333562 command_runner.go:130] > # The certificate for the secure metrics server.
	I1006 02:34:07.367729 2333562 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1006 02:34:07.367736 2333562 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1006 02:34:07.367748 2333562 command_runner.go:130] > # certificate on any modification event.
	I1006 02:34:07.367754 2333562 command_runner.go:130] > # metrics_cert = ""
	I1006 02:34:07.367762 2333562 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1006 02:34:07.367770 2333562 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1006 02:34:07.367775 2333562 command_runner.go:130] > # metrics_key = ""
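	Metrics are off in this config (enable_metrics is commented out as false); when enabled, CRI-O serves Prometheus text format on metrics_port. A minimal probe, assuming metrics were turned on with the default port shown above:
	# Sample a few CRI-O operation counters from the local metrics endpoint.
	curl -s http://127.0.0.1:9090/metrics | grep -m 5 'crio_operations'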
	I1006 02:34:07.367782 2333562 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1006 02:34:07.367791 2333562 command_runner.go:130] > [crio.tracing]
	I1006 02:34:07.367798 2333562 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1006 02:34:07.367809 2333562 command_runner.go:130] > # enable_tracing = false
	I1006 02:34:07.367816 2333562 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1006 02:34:07.367821 2333562 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1006 02:34:07.367828 2333562 command_runner.go:130] > # Number of samples to collect per million spans.
	I1006 02:34:07.367836 2333562 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1006 02:34:07.367843 2333562 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1006 02:34:07.367850 2333562 command_runner.go:130] > [crio.stats]
	I1006 02:34:07.367857 2333562 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1006 02:34:07.367864 2333562 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1006 02:34:07.367872 2333562 command_runner.go:130] > # stats_collection_period = 0
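	Several of the options above (pause_image among them) support live configuration reload. A hedged sketch of overriding one value via a drop-in rather than editing the main file (the drop-in filename is hypothetical; systemctl reload works only if the crio unit defines ExecReload, otherwise signal the daemon directly):
	# Override pause_image in a drop-in, then ask CRI-O to re-read its config.
	sudo tee /etc/crio/crio.conf.d/10-pause.conf <<-'EOF'
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	EOF
	sudo systemctl reload crio   # or: sudo pkill -HUP -x crio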
	I1006 02:34:07.367952 2333562 cni.go:84] Creating CNI manager for ""
	I1006 02:34:07.367965 2333562 cni.go:136] 1 nodes found, recommending kindnet
	I1006 02:34:07.367994 2333562 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1006 02:34:07.368014 2333562 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-951739 NodeName:multinode-951739 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 02:34:07.368153 2333562 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-951739"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
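	The rendered kubeadm config above can be checked offline before init runs. A sketch, assuming the 'kubeadm config validate' subcommand shipped with recent kubeadm releases, and using the binary and file paths from this run:
	# Validate the generated kubeadm config against the v1.28.2 binary.
	sudo /var/lib/minikube/binaries/v1.28.2/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml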
	I1006 02:34:07.368227 2333562 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-951739 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-951739 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
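	Once the kubelet unit file and the 10-kubeadm.conf drop-in land on the node (the scp steps just below), systemd has to re-read its units for the new ExecStart to take effect. The manual equivalent of what the runner does:
	# Reload unit definitions and (re)start the kubelet with the new flags.
	sudo systemctl daemon-reload
	sudo systemctl restart kubelet
	systemctl status kubelet --no-pager | head -n 5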
	I1006 02:34:07.368299 2333562 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1006 02:34:07.377988 2333562 command_runner.go:130] > kubeadm
	I1006 02:34:07.378007 2333562 command_runner.go:130] > kubectl
	I1006 02:34:07.378012 2333562 command_runner.go:130] > kubelet
	I1006 02:34:07.379185 2333562 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 02:34:07.379262 2333562 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 02:34:07.389524 2333562 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I1006 02:34:07.410789 2333562 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 02:34:07.431689 2333562 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I1006 02:34:07.452577 2333562 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1006 02:34:07.457042 2333562 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 02:34:07.470213 2333562 certs.go:56] Setting up /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739 for IP: 192.168.58.2
	I1006 02:34:07.470242 2333562 certs.go:190] acquiring lock for shared ca certs: {Name:mk4532656c287f04abff160cc5263fddcb69ac4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:34:07.470370 2333562 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.key
	I1006 02:34:07.470416 2333562 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17314-2262959/.minikube/proxy-client-ca.key
	I1006 02:34:07.470480 2333562 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/client.key
	I1006 02:34:07.470494 2333562 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/client.crt with IP's: []
	I1006 02:34:08.437143 2333562 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/client.crt ...
	I1006 02:34:08.437175 2333562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/client.crt: {Name:mk90134bf0afc6312769e9db34468da2beba92ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:34:08.437395 2333562 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/client.key ...
	I1006 02:34:08.437408 2333562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/client.key: {Name:mkfff282bab0928c1cf9c07b0dfe4ae789b6bce1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:34:08.437508 2333562 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/apiserver.key.cee25041
	I1006 02:34:08.437523 2333562 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1006 02:34:08.653545 2333562 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/apiserver.crt.cee25041 ...
	I1006 02:34:08.653577 2333562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/apiserver.crt.cee25041: {Name:mk02382faf378a41ddcde9df119d2acb0c3ec79f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:34:08.653758 2333562 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/apiserver.key.cee25041 ...
	I1006 02:34:08.653771 2333562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/apiserver.key.cee25041: {Name:mka739a677862830502419dc456af4ac8f265a45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:34:08.653863 2333562 certs.go:337] copying /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/apiserver.crt
	I1006 02:34:08.653943 2333562 certs.go:341] copying /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/apiserver.key
	I1006 02:34:08.654008 2333562 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/proxy-client.key
	I1006 02:34:08.654024 2333562 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/proxy-client.crt with IP's: []
	I1006 02:34:09.096956 2333562 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/proxy-client.crt ...
	I1006 02:34:09.096988 2333562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/proxy-client.crt: {Name:mked3502409804826142edc1a78920f47c169b6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:34:09.097177 2333562 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/proxy-client.key ...
	I1006 02:34:09.097190 2333562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/proxy-client.key: {Name:mkd6ca1fa1edc2d03324886f7eec28f2a15585b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:34:09.097276 2333562 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1006 02:34:09.097296 2333562 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1006 02:34:09.097315 2333562 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1006 02:34:09.097341 2333562 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1006 02:34:09.097353 2333562 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1006 02:34:09.097365 2333562 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1006 02:34:09.097380 2333562 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1006 02:34:09.097391 2333562 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
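	The apiserver certificate generated above is signed for the IPs [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]; that can be confirmed by reading the SANs back out of the file:
	# Print the Subject Alternative Names of the freshly minted cert.
	openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/apiserver.crt \
	    | grep -A1 'Subject Alternative Name'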
	I1006 02:34:09.097451 2333562 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/2268306.pem (1338 bytes)
	W1006 02:34:09.097493 2333562 certs.go:433] ignoring /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/2268306_empty.pem, impossibly tiny 0 bytes
	I1006 02:34:09.097507 2333562 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 02:34:09.097532 2333562 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem (1082 bytes)
	I1006 02:34:09.097567 2333562 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem (1123 bytes)
	I1006 02:34:09.097598 2333562 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/key.pem (1675 bytes)
	I1006 02:34:09.097653 2333562 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/22683062.pem (1708 bytes)
	I1006 02:34:09.097686 2333562 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/22683062.pem -> /usr/share/ca-certificates/22683062.pem
	I1006 02:34:09.097704 2333562 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1006 02:34:09.097714 2333562 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/2268306.pem -> /usr/share/ca-certificates/2268306.pem
	I1006 02:34:09.098319 2333562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1006 02:34:09.127635 2333562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1006 02:34:09.156441 2333562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 02:34:09.184797 2333562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 02:34:09.214161 2333562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 02:34:09.242175 2333562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 02:34:09.270691 2333562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 02:34:09.298350 2333562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1006 02:34:09.326725 2333562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/22683062.pem --> /usr/share/ca-certificates/22683062.pem (1708 bytes)
	I1006 02:34:09.355262 2333562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 02:34:09.382927 2333562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/2268306.pem --> /usr/share/ca-certificates/2268306.pem (1338 bytes)
	I1006 02:34:09.411822 2333562 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 02:34:09.432858 2333562 ssh_runner.go:195] Run: openssl version
	I1006 02:34:09.439600 2333562 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1006 02:34:09.439923 2333562 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22683062.pem && ln -fs /usr/share/ca-certificates/22683062.pem /etc/ssl/certs/22683062.pem"
	I1006 02:34:09.452276 2333562 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22683062.pem
	I1006 02:34:09.456850 2333562 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  6 02:19 /usr/share/ca-certificates/22683062.pem
	I1006 02:34:09.456878 2333562 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  6 02:19 /usr/share/ca-certificates/22683062.pem
	I1006 02:34:09.456929 2333562 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22683062.pem
	I1006 02:34:09.465194 2333562 command_runner.go:130] > 3ec20f2e
	I1006 02:34:09.465550 2333562 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22683062.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 02:34:09.476794 2333562 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 02:34:09.488226 2333562 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 02:34:09.493382 2333562 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  6 02:12 /usr/share/ca-certificates/minikubeCA.pem
	I1006 02:34:09.493664 2333562 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  6 02:12 /usr/share/ca-certificates/minikubeCA.pem
	I1006 02:34:09.493733 2333562 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 02:34:09.502094 2333562 command_runner.go:130] > b5213941
	I1006 02:34:09.502364 2333562 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 02:34:09.514050 2333562 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2268306.pem && ln -fs /usr/share/ca-certificates/2268306.pem /etc/ssl/certs/2268306.pem"
	I1006 02:34:09.526255 2333562 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2268306.pem
	I1006 02:34:09.531308 2333562 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  6 02:19 /usr/share/ca-certificates/2268306.pem
	I1006 02:34:09.531661 2333562 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  6 02:19 /usr/share/ca-certificates/2268306.pem
	I1006 02:34:09.531740 2333562 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2268306.pem
	I1006 02:34:09.540129 2333562 command_runner.go:130] > 51391683
	I1006 02:34:09.540530 2333562 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2268306.pem /etc/ssl/certs/51391683.0"
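	The three test/ln pairs above all apply one pattern: compute the OpenSSL subject hash of a PEM and expose it as /etc/ssl/certs/<hash>.0 so hash-based library lookups find it. The same pattern as a small helper (the function name is hypothetical):
	# Install a CA cert under its OpenSSL subject-hash name.
	install_ca() {
	  local pem="/usr/share/ca-certificates/$1"
	  local hash
	  hash=$(openssl x509 -hash -noout -in "$pem")   # e.g. 3ec20f2e, b5213941
	  sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
	}
	install_ca 22683062.pem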
	I1006 02:34:09.553702 2333562 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1006 02:34:09.558290 2333562 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1006 02:34:09.558358 2333562 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1006 02:34:09.558412 2333562 kubeadm.go:404] StartCluster: {Name:multinode-951739 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-951739 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1006 02:34:09.558513 2333562 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 02:34:09.558582 2333562 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 02:34:09.601378 2333562 cri.go:89] found id: ""
	I1006 02:34:09.601452 2333562 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 02:34:09.612471 2333562 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1006 02:34:09.612552 2333562 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1006 02:34:09.612568 2333562 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1006 02:34:09.612644 2333562 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 02:34:09.623504 2333562 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1006 02:34:09.623612 2333562 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 02:34:09.635296 2333562 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1006 02:34:09.635323 2333562 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1006 02:34:09.635332 2333562 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1006 02:34:09.635341 2333562 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 02:34:09.635395 2333562 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 02:34:09.635448 2333562 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 02:34:09.688291 2333562 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1006 02:34:09.688319 2333562 command_runner.go:130] > [init] Using Kubernetes version: v1.28.2
	I1006 02:34:09.688476 2333562 kubeadm.go:322] [preflight] Running pre-flight checks
	I1006 02:34:09.688496 2333562 command_runner.go:130] > [preflight] Running pre-flight checks
	I1006 02:34:09.736051 2333562 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1006 02:34:09.736136 2333562 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1006 02:34:09.736248 2333562 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-aws
	I1006 02:34:09.736280 2333562 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1047-aws
	I1006 02:34:09.736362 2333562 kubeadm.go:322] OS: Linux
	I1006 02:34:09.736387 2333562 command_runner.go:130] > OS: Linux
	I1006 02:34:09.736468 2333562 kubeadm.go:322] CGROUPS_CPU: enabled
	I1006 02:34:09.736494 2333562 command_runner.go:130] > CGROUPS_CPU: enabled
	I1006 02:34:09.736569 2333562 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1006 02:34:09.736595 2333562 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1006 02:34:09.736684 2333562 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1006 02:34:09.736708 2333562 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1006 02:34:09.736788 2333562 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1006 02:34:09.736814 2333562 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1006 02:34:09.736901 2333562 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1006 02:34:09.736925 2333562 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1006 02:34:09.737015 2333562 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1006 02:34:09.737046 2333562 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1006 02:34:09.737123 2333562 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1006 02:34:09.737147 2333562 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1006 02:34:09.737227 2333562 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1006 02:34:09.737253 2333562 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1006 02:34:09.737329 2333562 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1006 02:34:09.737362 2333562 command_runner.go:130] > CGROUPS_BLKIO: enabled
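	kubeadm's system verification just walked the cgroup controllers listed above; the same information is available directly from the kernel:
	# cgroup v1 controller table (name, hierarchy, num_cgroups, enabled)
	cat /proc/cgroups
	# cgroup v2 unified hierarchy, where present
	cat /sys/fs/cgroup/cgroup.controllers 2>/dev/null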
	I1006 02:34:09.822452 2333562 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 02:34:09.822523 2333562 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 02:34:09.822644 2333562 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 02:34:09.822678 2333562 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 02:34:09.822803 2333562 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 02:34:09.822835 2333562 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 02:34:10.068774 2333562 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 02:34:10.071147 2333562 out.go:204]   - Generating certificates and keys ...
	I1006 02:34:10.068851 2333562 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 02:34:10.071397 2333562 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1006 02:34:10.071435 2333562 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1006 02:34:10.071542 2333562 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1006 02:34:10.071591 2333562 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1006 02:34:10.445059 2333562 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1006 02:34:10.445139 2333562 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1006 02:34:10.758532 2333562 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1006 02:34:10.758617 2333562 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1006 02:34:10.909729 2333562 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1006 02:34:10.909753 2333562 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1006 02:34:11.406579 2333562 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1006 02:34:11.406607 2333562 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1006 02:34:12.198911 2333562 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1006 02:34:12.198967 2333562 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1006 02:34:12.199393 2333562 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-951739] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1006 02:34:12.199450 2333562 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-951739] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1006 02:34:12.788314 2333562 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1006 02:34:12.788352 2333562 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1006 02:34:12.788643 2333562 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-951739] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1006 02:34:12.788654 2333562 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-951739] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1006 02:34:13.374320 2333562 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1006 02:34:13.374347 2333562 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1006 02:34:13.693532 2333562 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1006 02:34:13.693561 2333562 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1006 02:34:14.083570 2333562 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1006 02:34:14.083598 2333562 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1006 02:34:14.083893 2333562 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 02:34:14.083929 2333562 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 02:34:14.243502 2333562 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 02:34:14.243538 2333562 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 02:34:14.695532 2333562 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 02:34:14.695562 2333562 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 02:34:14.942477 2333562 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 02:34:14.942514 2333562 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 02:34:15.345237 2333562 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 02:34:15.345266 2333562 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 02:34:15.345879 2333562 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 02:34:15.345898 2333562 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 02:34:15.348785 2333562 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 02:34:15.351015 2333562 out.go:204]   - Booting up control plane ...
	I1006 02:34:15.348869 2333562 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 02:34:15.351132 2333562 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 02:34:15.351150 2333562 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 02:34:15.351264 2333562 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 02:34:15.351278 2333562 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 02:34:15.353882 2333562 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 02:34:15.353902 2333562 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 02:34:15.367438 2333562 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 02:34:15.367465 2333562 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 02:34:15.368261 2333562 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 02:34:15.368284 2333562 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 02:34:15.368535 2333562 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1006 02:34:15.368550 2333562 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1006 02:34:15.472710 2333562 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1006 02:34:15.472736 2333562 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1006 02:34:22.974315 2333562 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.502096 seconds
	I1006 02:34:22.974339 2333562 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.502096 seconds
	I1006 02:34:22.974439 2333562 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1006 02:34:22.974461 2333562 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1006 02:34:22.989731 2333562 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1006 02:34:22.989758 2333562 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1006 02:34:23.517229 2333562 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1006 02:34:23.517270 2333562 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1006 02:34:23.517472 2333562 kubeadm.go:322] [mark-control-plane] Marking the node multinode-951739 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1006 02:34:23.517490 2333562 command_runner.go:130] > [mark-control-plane] Marking the node multinode-951739 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1006 02:34:24.031335 2333562 kubeadm.go:322] [bootstrap-token] Using token: 1wb2h5.1mp76h0jwc828ynp
	I1006 02:34:24.033471 2333562 out.go:204]   - Configuring RBAC rules ...
	I1006 02:34:24.031441 2333562 command_runner.go:130] > [bootstrap-token] Using token: 1wb2h5.1mp76h0jwc828ynp
	I1006 02:34:24.033599 2333562 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1006 02:34:24.033615 2333562 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1006 02:34:24.039553 2333562 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1006 02:34:24.039576 2333562 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1006 02:34:24.048366 2333562 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1006 02:34:24.048391 2333562 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1006 02:34:24.052672 2333562 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1006 02:34:24.052698 2333562 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1006 02:34:24.056682 2333562 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1006 02:34:24.056703 2333562 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1006 02:34:24.061820 2333562 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1006 02:34:24.061845 2333562 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1006 02:34:24.076071 2333562 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1006 02:34:24.076097 2333562 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1006 02:34:24.334136 2333562 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1006 02:34:24.334161 2333562 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1006 02:34:24.445673 2333562 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1006 02:34:24.445699 2333562 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1006 02:34:24.445721 2333562 kubeadm.go:322] 
	I1006 02:34:24.445778 2333562 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1006 02:34:24.445787 2333562 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1006 02:34:24.445792 2333562 kubeadm.go:322] 
	I1006 02:34:24.445873 2333562 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1006 02:34:24.445882 2333562 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1006 02:34:24.445887 2333562 kubeadm.go:322] 
	I1006 02:34:24.445911 2333562 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1006 02:34:24.445918 2333562 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1006 02:34:24.445973 2333562 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1006 02:34:24.445981 2333562 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1006 02:34:24.446028 2333562 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1006 02:34:24.446036 2333562 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1006 02:34:24.446041 2333562 kubeadm.go:322] 
	I1006 02:34:24.446092 2333562 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1006 02:34:24.446100 2333562 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1006 02:34:24.446104 2333562 kubeadm.go:322] 
	I1006 02:34:24.446149 2333562 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1006 02:34:24.446157 2333562 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1006 02:34:24.446162 2333562 kubeadm.go:322] 
	I1006 02:34:24.446210 2333562 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1006 02:34:24.446218 2333562 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1006 02:34:24.446288 2333562 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1006 02:34:24.446297 2333562 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1006 02:34:24.446360 2333562 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1006 02:34:24.446368 2333562 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1006 02:34:24.446373 2333562 kubeadm.go:322] 
	I1006 02:34:24.446452 2333562 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1006 02:34:24.446460 2333562 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1006 02:34:24.446540 2333562 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1006 02:34:24.446548 2333562 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1006 02:34:24.446552 2333562 kubeadm.go:322] 
	I1006 02:34:24.446630 2333562 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 1wb2h5.1mp76h0jwc828ynp \
	I1006 02:34:24.446637 2333562 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 1wb2h5.1mp76h0jwc828ynp \
	I1006 02:34:24.446732 2333562 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:80472cab419bab12b6f684b1710332d34cec39ebaa3946bc81d26724ddc88d38 \
	I1006 02:34:24.446740 2333562 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:80472cab419bab12b6f684b1710332d34cec39ebaa3946bc81d26724ddc88d38 \
	I1006 02:34:24.446763 2333562 kubeadm.go:322] 	--control-plane 
	I1006 02:34:24.446771 2333562 command_runner.go:130] > 	--control-plane 
	I1006 02:34:24.446775 2333562 kubeadm.go:322] 
	I1006 02:34:24.446855 2333562 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1006 02:34:24.446863 2333562 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1006 02:34:24.446867 2333562 kubeadm.go:322] 
	I1006 02:34:24.446944 2333562 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 1wb2h5.1mp76h0jwc828ynp \
	I1006 02:34:24.446952 2333562 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 1wb2h5.1mp76h0jwc828ynp \
	I1006 02:34:24.447062 2333562 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:80472cab419bab12b6f684b1710332d34cec39ebaa3946bc81d26724ddc88d38 
	I1006 02:34:24.447069 2333562 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:80472cab419bab12b6f684b1710332d34cec39ebaa3946bc81d26724ddc88d38 
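	The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key, so it can be recomputed and compared before trusting a join command. A sketch; the CA path below is where this minikube run keeps the CA (stock kubeadm nodes use /etc/kubernetes/pki/ca.crt):
	# Recompute the discovery hash from the cluster CA certificate.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl pkey -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* /sha256:/'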
	I1006 02:34:24.451386 2333562 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-aws\n", err: exit status 1
	I1006 02:34:24.451411 2333562 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-aws\n", err: exit status 1
	I1006 02:34:24.451511 2333562 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 02:34:24.451521 2333562 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 02:34:24.451533 2333562 cni.go:84] Creating CNI manager for ""
	I1006 02:34:24.451539 2333562 cni.go:136] 1 nodes found, recommending kindnet
	I1006 02:34:24.454161 2333562 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1006 02:34:24.456518 2333562 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1006 02:34:24.471574 2333562 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1006 02:34:24.471595 2333562 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I1006 02:34:24.471611 2333562 command_runner.go:130] > Device: 38h/56d	Inode: 1826972     Links: 1
	I1006 02:34:24.471621 2333562 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1006 02:34:24.471628 2333562 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I1006 02:34:24.471637 2333562 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I1006 02:34:24.471644 2333562 command_runner.go:130] > Change: 2023-10-06 02:11:32.600474282 +0000
	I1006 02:34:24.471656 2333562 command_runner.go:130] >  Birth: 2023-10-06 02:11:32.556475164 +0000
	I1006 02:34:24.472299 2333562 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1006 02:34:24.472316 2333562 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1006 02:34:24.514111 2333562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1006 02:34:25.409152 2333562 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1006 02:34:25.420597 2333562 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1006 02:34:25.429149 2333562 command_runner.go:130] > serviceaccount/kindnet created
	I1006 02:34:25.441782 2333562 command_runner.go:130] > daemonset.apps/kindnet created
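	With the kindnet objects created, the rollout can be confirmed before the node-readiness checks later in the run. A sketch; the kube-system namespace and the app=kindnet label are assumptions about minikube's kindnet manifest, not values shown in this log:
	# Wait for the CNI daemonset to roll out, then list its pods.
	kubectl -n kube-system rollout status daemonset/kindnet --timeout=60s
	kubectl -n kube-system get pods -l app=kindnet -o wide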
	I1006 02:34:25.447610 2333562 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1006 02:34:25.447752 2333562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:34:25.447830 2333562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=84890cb24d0240d9d992d7c7712ee519ceed4154 minikube.k8s.io/name=multinode-951739 minikube.k8s.io/updated_at=2023_10_06T02_34_25_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:34:25.614844 2333562 command_runner.go:130] > node/multinode-951739 labeled
	I1006 02:34:25.618685 2333562 command_runner.go:130] > -16
	I1006 02:34:25.618713 2333562 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1006 02:34:25.618736 2333562 ops.go:34] apiserver oom_adj: -16
	I1006 02:34:25.618807 2333562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:34:25.720905 2333562 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1006 02:34:25.721000 2333562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:34:25.817336 2333562 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1006 02:34:26.321894 2333562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:34:26.414504 2333562 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1006 02:34:26.822210 2333562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:34:26.907916 2333562 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1006 02:34:27.322274 2333562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:34:27.414724 2333562 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1006 02:34:27.821608 2333562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:34:27.910315 2333562 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1006 02:34:28.322023 2333562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:34:28.420229 2333562 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1006 02:34:28.821627 2333562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:34:28.916887 2333562 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1006 02:34:29.321352 2333562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:34:29.421183 2333562 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1006 02:34:29.822043 2333562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:34:29.915008 2333562 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1006 02:34:30.321812 2333562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:34:30.410901 2333562 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1006 02:34:30.822281 2333562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:34:30.912470 2333562 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1006 02:34:31.322071 2333562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:34:31.417013 2333562 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1006 02:34:31.821347 2333562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:34:31.916698 2333562 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1006 02:34:32.322282 2333562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:34:32.415083 2333562 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1006 02:34:32.821334 2333562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:34:32.912778 2333562 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1006 02:34:33.321316 2333562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:34:33.417636 2333562 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1006 02:34:33.822213 2333562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:34:33.924999 2333562 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1006 02:34:34.321492 2333562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:34:34.423450 2333562 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1006 02:34:34.821946 2333562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:34:34.913739 2333562 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1006 02:34:35.321355 2333562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:34:35.421264 2333562 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1006 02:34:35.822057 2333562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:34:35.933254 2333562 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1006 02:34:36.321948 2333562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 02:34:36.434050 2333562 command_runner.go:130] > NAME      SECRETS   AGE
	I1006 02:34:36.434943 2333562 command_runner.go:130] > default   0         0s
	I1006 02:34:36.438397 2333562 kubeadm.go:1081] duration metric: took 10.990686767s to wait for elevateKubeSystemPrivileges.
	I1006 02:34:36.438419 2333562 kubeadm.go:406] StartCluster complete in 26.880012135s
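[editor's note] The ~11s of NotFound retries above is expected: the kube-controller-manager creates the "default" ServiceAccount asynchronously after the API server comes up, so minikube polls `kubectl get sa default` roughly every 500ms until it appears. A minimal sketch of that wait loop, assuming kubectl on PATH; waitForDefaultSA is a hypothetical name:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA mirrors the retry loop visible in the log above.
	func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// NotFound is expected until the controller manager creates the SA.
			if exec.Command("kubectl", "--kubeconfig", kubeconfig,
				"get", "sa", "default").Run() == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond) // matches the cadence in the log
		}
		return fmt.Errorf("default service account not created within %s", timeout)
	}

	func main() {
		if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("default service account is ready")
	}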
	I1006 02:34:36.438443 2333562 settings.go:142] acquiring lock: {Name:mkbf8759b61c125112e0d07f4c53bb4e84a6de4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:34:36.438502 2333562 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17314-2262959/kubeconfig
	I1006 02:34:36.439240 2333562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/kubeconfig: {Name:mkf2ae4867a3638ac11dac5beae6919a4f83b43a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:34:36.439741 2333562 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17314-2262959/kubeconfig
	I1006 02:34:36.439984 2333562 kapi.go:59] client config for multinode-951739: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/client.crt", KeyFile:"/home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/client.key", CAFile:"/home/jenkins/minikube-integration/17314-2262959/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16a24a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
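[editor's note] The rest.Config dump shows the client authenticates with the per-profile client certificate and cluster CA rather than a bearer token. A minimal client-go sketch that builds the equivalent config from the kubeconfig updated above and issues a first request, assuming k8s.io/client-go is available:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Host, client cert/key, and CA all come from the kubeconfig that
		// minikube just wrote, matching the fields in the dump above.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17314-2262959/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("nodes:", len(nodes.Items))
	}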
	I1006 02:34:36.441140 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1006 02:34:36.441150 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:36.441158 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:36.441165 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:36.441377 2333562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1006 02:34:36.441606 2333562 config.go:182] Loaded profile config "multinode-951739": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1006 02:34:36.441638 2333562 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1006 02:34:36.441693 2333562 addons.go:69] Setting storage-provisioner=true in profile "multinode-951739"
	I1006 02:34:36.441708 2333562 addons.go:231] Setting addon storage-provisioner=true in "multinode-951739"
	I1006 02:34:36.441751 2333562 host.go:66] Checking if "multinode-951739" exists ...
	I1006 02:34:36.442192 2333562 cli_runner.go:164] Run: docker container inspect multinode-951739 --format={{.State.Status}}
	I1006 02:34:36.442844 2333562 cert_rotation.go:137] Starting client certificate rotation controller
	I1006 02:34:36.442877 2333562 addons.go:69] Setting default-storageclass=true in profile "multinode-951739"
	I1006 02:34:36.442897 2333562 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-951739"
	I1006 02:34:36.443178 2333562 cli_runner.go:164] Run: docker container inspect multinode-951739 --format={{.State.Status}}
	I1006 02:34:36.501257 2333562 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 02:34:36.502153 2333562 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17314-2262959/kubeconfig
	I1006 02:34:36.503355 2333562 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 02:34:36.503370 2333562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 02:34:36.503431 2333562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951739
	I1006 02:34:36.503655 2333562 kapi.go:59] client config for multinode-951739: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/client.crt", KeyFile:"/home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/client.key", CAFile:"/home/jenkins/minikube-integration/17314-2262959/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16a24a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 02:34:36.503971 2333562 addons.go:231] Setting addon default-storageclass=true in "multinode-951739"
	I1006 02:34:36.504002 2333562 host.go:66] Checking if "multinode-951739" exists ...
	I1006 02:34:36.504493 2333562 cli_runner.go:164] Run: docker container inspect multinode-951739 --format={{.State.Status}}
	I1006 02:34:36.545768 2333562 round_trippers.go:574] Response Status: 200 OK in 104 milliseconds
	I1006 02:34:36.545789 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:36.545797 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:36.545803 2333562 round_trippers.go:580]     Content-Length: 291
	I1006 02:34:36.545809 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:36 GMT
	I1006 02:34:36.545816 2333562 round_trippers.go:580]     Audit-Id: 17351cbd-0d64-4552-b5bb-99799ac16cb5
	I1006 02:34:36.545822 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:36.545828 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:36.545833 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:36.549639 2333562 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c91a07be-980f-49fe-875d-9d42fad520cd","resourceVersion":"344","creationTimestamp":"2023-10-06T02:34:24Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1006 02:34:36.550070 2333562 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c91a07be-980f-49fe-875d-9d42fad520cd","resourceVersion":"344","creationTimestamp":"2023-10-06T02:34:24Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1006 02:34:36.550120 2333562 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1006 02:34:36.550129 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:36.550139 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:36.550148 2333562 round_trippers.go:473]     Content-Type: application/json
	I1006 02:34:36.550155 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:36.554957 2333562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35339 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/multinode-951739/id_rsa Username:docker}
	I1006 02:34:36.562423 2333562 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 02:34:36.562444 2333562 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 02:34:36.562507 2333562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951739
	I1006 02:34:36.593122 2333562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35339 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/multinode-951739/id_rsa Username:docker}
	I1006 02:34:36.642595 2333562 round_trippers.go:574] Response Status: 200 OK in 92 milliseconds
	I1006 02:34:36.642664 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:36.642687 2333562 round_trippers.go:580]     Content-Length: 291
	I1006 02:34:36.642746 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:36 GMT
	I1006 02:34:36.642771 2333562 round_trippers.go:580]     Audit-Id: b3efb8fa-77ad-4698-921b-5efc276cabc8
	I1006 02:34:36.642790 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:36.642824 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:36.642850 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:36.642871 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:36.642935 2333562 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c91a07be-980f-49fe-875d-9d42fad520cd","resourceVersion":"346","creationTimestamp":"2023-10-06T02:34:24Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1006 02:34:36.643143 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1006 02:34:36.643173 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:36.643195 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:36.643228 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:36.663333 2333562 command_runner.go:130] > apiVersion: v1
	I1006 02:34:36.663399 2333562 command_runner.go:130] > data:
	I1006 02:34:36.663418 2333562 command_runner.go:130] >   Corefile: |
	I1006 02:34:36.663469 2333562 command_runner.go:130] >     .:53 {
	I1006 02:34:36.663493 2333562 command_runner.go:130] >         errors
	I1006 02:34:36.663515 2333562 command_runner.go:130] >         health {
	I1006 02:34:36.663536 2333562 command_runner.go:130] >            lameduck 5s
	I1006 02:34:36.663569 2333562 command_runner.go:130] >         }
	I1006 02:34:36.663587 2333562 command_runner.go:130] >         ready
	I1006 02:34:36.663610 2333562 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1006 02:34:36.663644 2333562 command_runner.go:130] >            pods insecure
	I1006 02:34:36.663670 2333562 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1006 02:34:36.663691 2333562 command_runner.go:130] >            ttl 30
	I1006 02:34:36.663726 2333562 command_runner.go:130] >         }
	I1006 02:34:36.663748 2333562 command_runner.go:130] >         prometheus :9153
	I1006 02:34:36.663769 2333562 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1006 02:34:36.663803 2333562 command_runner.go:130] >            max_concurrent 1000
	I1006 02:34:36.663825 2333562 command_runner.go:130] >         }
	I1006 02:34:36.663847 2333562 command_runner.go:130] >         cache 30
	I1006 02:34:36.663879 2333562 command_runner.go:130] >         loop
	I1006 02:34:36.663901 2333562 command_runner.go:130] >         reload
	I1006 02:34:36.663923 2333562 command_runner.go:130] >         loadbalance
	I1006 02:34:36.663955 2333562 command_runner.go:130] >     }
	I1006 02:34:36.663977 2333562 command_runner.go:130] > kind: ConfigMap
	I1006 02:34:36.663996 2333562 command_runner.go:130] > metadata:
	I1006 02:34:36.664037 2333562 command_runner.go:130] >   creationTimestamp: "2023-10-06T02:34:24Z"
	I1006 02:34:36.664061 2333562 command_runner.go:130] >   name: coredns
	I1006 02:34:36.664082 2333562 command_runner.go:130] >   namespace: kube-system
	I1006 02:34:36.664116 2333562 command_runner.go:130] >   resourceVersion: "259"
	I1006 02:34:36.664142 2333562 command_runner.go:130] >   uid: c799aca6-d976-409d-a6d1-f9925c74c0c4
	I1006 02:34:36.666551 2333562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1006 02:34:36.696054 2333562 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
	I1006 02:34:36.696073 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:36.696081 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:36.696088 2333562 round_trippers.go:580]     Content-Length: 291
	I1006 02:34:36.696094 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:36 GMT
	I1006 02:34:36.696101 2333562 round_trippers.go:580]     Audit-Id: e6e6fa0f-83fe-46d2-8613-32b58b41f3f6
	I1006 02:34:36.696108 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:36.696127 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:36.696138 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:36.697742 2333562 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c91a07be-980f-49fe-875d-9d42fad520cd","resourceVersion":"346","creationTimestamp":"2023-10-06T02:34:24Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1006 02:34:36.697860 2333562 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-951739" context rescaled to 1 replicas
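[editor's note] The GET/PUT pair above targets the deployment's scale subresource: the stock manifest asks for 2 CoreDNS replicas, and minikube rewrites spec.replicas to 1 for a single control-plane node. A minimal client-go sketch of the same rescale, with a hypothetical kubeconfig path:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.TODO()
		// Read the current Scale (spec.replicas was 2 in the response above).
		scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		scale.Spec.Replicas = 1 // one replica is enough for a single node
		if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("coredns rescaled to 1 replica")
	}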
	I1006 02:34:36.697890 2333562 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 02:34:36.700262 2333562 out.go:177] * Verifying Kubernetes components...
	I1006 02:34:36.702161 2333562 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 02:34:36.774304 2333562 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 02:34:36.790314 2333562 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1006 02:34:37.348210 2333562 command_runner.go:130] > configmap/coredns replaced
	I1006 02:34:37.355534 2333562 start.go:926] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
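[editor's note] The sed pipeline above edits the CoreDNS Corefile in place: it inserts a log directive before errors and a hosts block before forward, so host.minikube.internal resolves to the gateway 192.168.58.1 from inside pods. Reconstructed from the ConfigMap dump and the sed expressions, the replaced Corefile should read:

	.:53 {
	    log
	    errors
	    health {
	       lameduck 5s
	    }
	    ready
	    kubernetes cluster.local in-addr.arpa ip6.arpa {
	       pods insecure
	       fallthrough in-addr.arpa ip6.arpa
	       ttl 30
	    }
	    prometheus :9153
	    hosts {
	       192.168.58.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf {
	       max_concurrent 1000
	    }
	    cache 30
	    loop
	    reload
	    loadbalance
	}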
	I1006 02:34:37.355926 2333562 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17314-2262959/kubeconfig
	I1006 02:34:37.356188 2333562 kapi.go:59] client config for multinode-951739: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/client.crt", KeyFile:"/home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/client.key", CAFile:"/home/jenkins/minikube-integration/17314-2262959/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16a24a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 02:34:37.356438 2333562 node_ready.go:35] waiting up to 6m0s for node "multinode-951739" to be "Ready" ...
	I1006 02:34:37.356499 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:37.356504 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:37.356512 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:37.356519 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:37.364918 2333562 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1006 02:34:37.364979 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:37.365002 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:37 GMT
	I1006 02:34:37.365024 2333562 round_trippers.go:580]     Audit-Id: 6a60c1dd-68fa-422e-b647-28fbdba34d24
	I1006 02:34:37.365047 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:37.365069 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:37.365091 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:37.365113 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:37.365277 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:37.366011 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:37.366064 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:37.366095 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:37.366122 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:37.371689 2333562 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1006 02:34:37.371745 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:37.371767 2333562 round_trippers.go:580]     Audit-Id: 35d9b646-7d23-4082-b333-531802a5b219
	I1006 02:34:37.371789 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:37.371815 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:37.371837 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:37.371870 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:37.371891 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:37 GMT
	I1006 02:34:37.373355 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:37.498766 2333562 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1006 02:34:37.505632 2333562 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1006 02:34:37.514511 2333562 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1006 02:34:37.525677 2333562 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1006 02:34:37.535664 2333562 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1006 02:34:37.547956 2333562 command_runner.go:130] > pod/storage-provisioner created
	I1006 02:34:37.552959 2333562 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1006 02:34:37.553113 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I1006 02:34:37.553126 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:37.553135 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:37.553142 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:37.556220 2333562 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1006 02:34:37.556285 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:37.556308 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:37 GMT
	I1006 02:34:37.556329 2333562 round_trippers.go:580]     Audit-Id: 4562f81d-8a31-421f-8e5a-99c39ff28e49
	I1006 02:34:37.556365 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:37.556391 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:37.556413 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:37.556448 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:37.556480 2333562 round_trippers.go:580]     Content-Length: 1273
	I1006 02:34:37.556772 2333562 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"402"},"items":[{"metadata":{"name":"standard","uid":"316461e8-dba1-4dab-a736-eb43e61833b9","resourceVersion":"381","creationTimestamp":"2023-10-06T02:34:37Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-06T02:34:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1006 02:34:37.557178 2333562 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"316461e8-dba1-4dab-a736-eb43e61833b9","resourceVersion":"381","creationTimestamp":"2023-10-06T02:34:37Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-06T02:34:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1006 02:34:37.557225 2333562 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1006 02:34:37.557239 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:37.557250 2333562 round_trippers.go:473]     Content-Type: application/json
	I1006 02:34:37.557260 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:37.557272 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:37.563436 2333562 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1006 02:34:37.563456 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:37.563465 2333562 round_trippers.go:580]     Content-Length: 1220
	I1006 02:34:37.563472 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:37 GMT
	I1006 02:34:37.563478 2333562 round_trippers.go:580]     Audit-Id: 058766c0-ebf3-423d-b5eb-4f2032744b4e
	I1006 02:34:37.563484 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:37.563492 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:37.563498 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:37.563507 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:37.563592 2333562 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"316461e8-dba1-4dab-a736-eb43e61833b9","resourceVersion":"381","creationTimestamp":"2023-10-06T02:34:37Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-06T02:34:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1006 02:34:37.571337 2333562 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1006 02:34:37.573644 2333562 addons.go:502] enable addons completed in 1.131996922s: enabled=[storage-provisioner default-storageclass]
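[editor's note] The default-storageclass addon enabled above is what the preceding PUT confirms: a "standard" class backed by the minikube-hostpath provisioner, marked as the cluster default via the is-default-class annotation. Reconstructed from the response body, the applied manifest is roughly:

	apiVersion: storage.k8s.io/v1
	kind: StorageClass
	metadata:
	  name: standard
	  labels:
	    addonmanager.kubernetes.io/mode: EnsureExists
	  annotations:
	    storageclass.kubernetes.io/is-default-class: "true"
	provisioner: k8s.io/minikube-hostpath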
	I1006 02:34:37.874640 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:37.874664 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:37.874681 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:37.874693 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:37.877443 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:37.877470 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:37.877478 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:37 GMT
	I1006 02:34:37.877493 2333562 round_trippers.go:580]     Audit-Id: 15191e93-572d-478e-b7cf-d042c6f273bb
	I1006 02:34:37.877500 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:37.877512 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:37.877519 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:37.877525 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:37.877778 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:38.374216 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:38.374282 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:38.374307 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:38.374330 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:38.377281 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:38.377356 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:38.377380 2333562 round_trippers.go:580]     Audit-Id: ef9120a2-f4a9-42ed-b403-da7d7c241497
	I1006 02:34:38.377400 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:38.377421 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:38.377455 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:38.377475 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:38.377495 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:38 GMT
	I1006 02:34:38.378053 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:38.874040 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:38.874064 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:38.874075 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:38.874083 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:38.876735 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:38.876792 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:38.876814 2333562 round_trippers.go:580]     Audit-Id: 190cad59-c2fe-462f-a35d-584358055835
	I1006 02:34:38.876836 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:38.876870 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:38.876893 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:38.876913 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:38.876935 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:38 GMT
	I1006 02:34:38.877093 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:39.374695 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:39.374718 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:39.374733 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:39.374740 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:39.377234 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:39.377253 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:39.377262 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:39.377268 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:39.377275 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:39.377281 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:39 GMT
	I1006 02:34:39.377287 2333562 round_trippers.go:580]     Audit-Id: 1aee92f2-a526-4dc4-b2fb-5f3d908ddc5b
	I1006 02:34:39.377293 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:39.377440 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:39.377842 2333562 node_ready.go:58] node "multinode-951739" has status "Ready":"False"
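[editor's note] From here the log settles into a ~500ms polling loop: node_ready.go GETs the node object and reports "Ready":"False" until the kubelet posts a Ready condition (the node was created at 02:34:21 and the kindnet CNI daemonset is still rolling out). A minimal client-go sketch of the same check, bounded to the 6m0s budget the log states, with a hypothetical kubeconfig path:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the node's Ready condition is True.
	func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// 720 polls at 500ms = the 6m0s budget node_ready.go logs above.
		for i := 0; i < 720; i++ {
			if ok, err := nodeReady(cs, "multinode-951739"); err == nil && ok {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for node readiness")
	}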
	I1006 02:34:39.874655 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:39.874679 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:39.874689 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:39.874696 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:39.877288 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:39.877310 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:39.877319 2333562 round_trippers.go:580]     Audit-Id: 6e64b65c-c010-4e87-a249-77e6fa753b75
	I1006 02:34:39.877325 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:39.877331 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:39.877338 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:39.877344 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:39.877351 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:39 GMT
	I1006 02:34:39.877496 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:40.374310 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:40.374335 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:40.374345 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:40.374353 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:40.377042 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:40.377066 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:40.377075 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:40 GMT
	I1006 02:34:40.377082 2333562 round_trippers.go:580]     Audit-Id: f637bee6-584b-44bf-9190-b30ade89fee7
	I1006 02:34:40.377104 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:40.377116 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:40.377123 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:40.377136 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:40.377359 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:40.874008 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:40.874033 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:40.874044 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:40.874051 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:40.876858 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:40.876884 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:40.876894 2333562 round_trippers.go:580]     Audit-Id: c12cdd7a-eba7-416b-8bce-450b9964b051
	I1006 02:34:40.876900 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:40.876909 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:40.876915 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:40.876922 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:40.876929 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:40 GMT
	I1006 02:34:40.877050 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:41.374006 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:41.374031 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:41.374042 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:41.374049 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:41.376744 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:41.376764 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:41.376772 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:41.376779 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:41.376785 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:41.376792 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:41.376798 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:41 GMT
	I1006 02:34:41.376805 2333562 round_trippers.go:580]     Audit-Id: a1230a7c-a348-4f3a-a4d3-2ea376ed6405
	I1006 02:34:41.376911 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:41.874538 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:41.874563 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:41.874572 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:41.874579 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:41.876993 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:41.877017 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:41.877025 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:41.877032 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:41.877038 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:41.877048 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:41.877054 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:41 GMT
	I1006 02:34:41.877060 2333562 round_trippers.go:580]     Audit-Id: 74c89712-1631-4681-a8dd-991e3dc20cdf
	I1006 02:34:41.877203 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:41.877610 2333562 node_ready.go:58] node "multinode-951739" has status "Ready":"False"
	I1006 02:34:42.374254 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:42.374279 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:42.374289 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:42.374297 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:42.376882 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:42.376907 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:42.376916 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:42 GMT
	I1006 02:34:42.376923 2333562 round_trippers.go:580]     Audit-Id: 65088ce6-3015-4901-b95e-e8194c6297c4
	I1006 02:34:42.376929 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:42.376935 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:42.376941 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:42.376951 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:42.377231 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:42.874030 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:42.874053 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:42.874062 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:42.874070 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:42.876604 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:42.876623 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:42.876631 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:42.876638 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:42 GMT
	I1006 02:34:42.876644 2333562 round_trippers.go:580]     Audit-Id: c6cab776-6e2b-423c-9bec-fd6ef908c2ab
	I1006 02:34:42.876650 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:42.876657 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:42.876663 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:42.876817 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:43.374020 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:43.374045 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:43.374063 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:43.374071 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:43.376685 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:43.376710 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:43.376719 2333562 round_trippers.go:580]     Audit-Id: 85ce89dc-3a28-43c2-9e25-7133f4b3f274
	I1006 02:34:43.376726 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:43.376732 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:43.376739 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:43.376745 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:43.376763 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:43 GMT
	I1006 02:34:43.376917 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:43.874043 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:43.874073 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:43.874084 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:43.874092 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:43.876752 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:43.876781 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:43.876792 2333562 round_trippers.go:580]     Audit-Id: 1a85eebd-92ba-49a9-9605-b9e0c844e5a1
	I1006 02:34:43.876798 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:43.876809 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:43.876834 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:43.876846 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:43.876853 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:43 GMT
	I1006 02:34:43.877113 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:43.877633 2333562 node_ready.go:58] node "multinode-951739" has status "Ready":"False"
	I1006 02:34:44.374093 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:44.374116 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:44.374125 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:44.374132 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:44.376593 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:44.376618 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:44.376626 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:44.376634 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:44 GMT
	I1006 02:34:44.376641 2333562 round_trippers.go:580]     Audit-Id: edecd7a4-094e-4c66-9a74-6ca8e6e44074
	I1006 02:34:44.376647 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:44.376654 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:44.376660 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:44.376792 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:44.874950 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:44.874978 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:44.874989 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:44.874997 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:44.877679 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:44.877707 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:44.877717 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:44.877724 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:44.877730 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:44 GMT
	I1006 02:34:44.877737 2333562 round_trippers.go:580]     Audit-Id: d192a3ce-6cc5-4031-9dd5-fe13a1b7c1ee
	I1006 02:34:44.877743 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:44.877758 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:44.877910 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:45.374027 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:45.374052 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:45.374062 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:45.374069 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:45.376681 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:45.376703 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:45.376714 2333562 round_trippers.go:580]     Audit-Id: cf1e465b-e688-4140-af91-05689507beb3
	I1006 02:34:45.376720 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:45.376727 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:45.376733 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:45.376748 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:45.376757 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:45 GMT
	I1006 02:34:45.376913 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:45.874000 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:45.874025 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:45.874035 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:45.874062 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:45.876650 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:45.876674 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:45.876682 2333562 round_trippers.go:580]     Audit-Id: 34a23b4b-1a24-44d1-bed1-ba4c60a1415d
	I1006 02:34:45.876689 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:45.876695 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:45.876702 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:45.876708 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:45.876715 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:45 GMT
	I1006 02:34:45.876875 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:46.374643 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:46.374676 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:46.374687 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:46.374695 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:46.377244 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:46.377267 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:46.377275 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:46.377282 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:46.377288 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:46.377295 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:46 GMT
	I1006 02:34:46.377301 2333562 round_trippers.go:580]     Audit-Id: 1459b779-02fc-45da-a647-268cc44055dd
	I1006 02:34:46.377307 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:46.377470 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:46.377891 2333562 node_ready.go:58] node "multinode-951739" has status "Ready":"False"
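Note that every logged body ends at "[truncated 6223 chars]" before reaching status.conditions, which is where the Ready answer actually lives. A standard-library sketch (readyStatus is a hypothetical helper) of extracting that condition from a full, untruncated response body:

package nodewait

import "encoding/json"

// readyStatus returns the Node's Ready condition ("True", "False", or
// "Unknown") from a raw /api/v1/nodes/<name> response body.
func readyStatus(body []byte) (string, error) {
	var n struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}
	if err := json.Unmarshal(body, &n); err != nil {
		return "", err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status, nil
		}
	}
	return "Unknown", nil
}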
	I1006 02:34:46.874059 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:46.874088 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:46.874098 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:46.874105 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:46.876883 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:46.876904 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:46.876912 2333562 round_trippers.go:580]     Audit-Id: 9546491a-746f-4cd3-910a-0c5a78b4e6eb
	I1006 02:34:46.876918 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:46.876924 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:46.876931 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:46.876937 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:46.876943 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:46 GMT
	I1006 02:34:46.877161 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:47.373938 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:47.373962 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:47.373971 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:47.373978 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:47.376722 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:47.376761 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:47.376778 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:47.376785 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:47 GMT
	I1006 02:34:47.376792 2333562 round_trippers.go:580]     Audit-Id: ba4438e8-0fab-486b-8c69-5c9c82e9616b
	I1006 02:34:47.376804 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:47.376810 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:47.376817 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:47.376946 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:47.874525 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:47.874549 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:47.874559 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:47.874566 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:47.876986 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:47.877010 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:47.877018 2333562 round_trippers.go:580]     Audit-Id: 8d9ba4d9-1a49-4885-afdb-0c84cdf2bf4f
	I1006 02:34:47.877025 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:47.877037 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:47.877043 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:47.877050 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:47.877062 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:47 GMT
	I1006 02:34:47.877395 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:48.374273 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:48.374299 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:48.374314 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:48.374322 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:48.377068 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:48.377092 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:48.377100 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:48.377107 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:48.377114 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:48.377120 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:48.377126 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:48 GMT
	I1006 02:34:48.377133 2333562 round_trippers.go:580]     Audit-Id: 501d06da-be15-40ef-82ff-fe5211dbb322
	I1006 02:34:48.377319 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:48.873948 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:48.873974 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:48.873984 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:48.873992 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:48.876596 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:48.876622 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:48.876630 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:48.876636 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:48.876642 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:48.876648 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:48 GMT
	I1006 02:34:48.876655 2333562 round_trippers.go:580]     Audit-Id: f78b3805-fff8-4082-86ab-b71f14782abf
	I1006 02:34:48.876661 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:48.876795 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:48.877221 2333562 node_ready.go:58] node "multinode-951739" has status "Ready":"False"
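The X-Kubernetes-Pf-Flowschema-Uid and X-Kubernetes-Pf-Prioritylevel-Uid headers on each response identify the API Priority and Fairness FlowSchema and PriorityLevelConfiguration that classified the request; they are constant here because every poll takes the same path. A sketch of surfacing them from a raw net/http response, with apfInfo as a hypothetical helper:

package nodewait

import "net/http"

// apfInfo extracts the API Priority and Fairness identifiers the
// apiserver attaches to every response, as seen in the headers above.
func apfInfo(resp *http.Response) (flowSchemaUID, priorityLevelUID string) {
	return resp.Header.Get("X-Kubernetes-Pf-Flowschema-Uid"),
		resp.Header.Get("X-Kubernetes-Pf-Prioritylevel-Uid")
}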
	I1006 02:34:49.374978 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:49.375003 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:49.375014 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:49.375022 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:49.377671 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:49.377691 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:49.377699 2333562 round_trippers.go:580]     Audit-Id: 3ad93ce1-20d3-4167-9523-27fdc0560ce2
	I1006 02:34:49.377705 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:49.377711 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:49.377718 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:49.377724 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:49.377730 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:49 GMT
	I1006 02:34:49.377858 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:49.874875 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:49.874901 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:49.874912 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:49.874921 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:49.877402 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:49.877423 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:49.877431 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:49.877437 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:49.877443 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:49 GMT
	I1006 02:34:49.877449 2333562 round_trippers.go:580]     Audit-Id: f4a00679-e346-458f-b16f-6246f373d9a3
	I1006 02:34:49.877455 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:49.877462 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:49.877603 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:50.374882 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:50.374910 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:50.374920 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:50.374927 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:50.377455 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:50.377480 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:50.377488 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:50.377495 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:50.377501 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:50.377508 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:50.377515 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:50 GMT
	I1006 02:34:50.377525 2333562 round_trippers.go:580]     Audit-Id: dce5f3db-c3cc-4b5b-a2e6-a7d9989d1246
	I1006 02:34:50.377674 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:50.874799 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:50.874825 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:50.874835 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:50.874843 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:50.877505 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:50.877527 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:50.877536 2333562 round_trippers.go:580]     Audit-Id: 07dd5ac5-ca3b-4814-a7cf-5b598db2e708
	I1006 02:34:50.877542 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:50.877549 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:50.877555 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:50.877561 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:50.877568 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:50 GMT
	I1006 02:34:50.877753 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:50.878171 2333562 node_ready.go:58] node "multinode-951739" has status "Ready":"False"
	I1006 02:34:51.374148 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:51.374171 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:51.374183 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:51.374190 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:51.376655 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:51.376683 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:51.376693 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:51.376699 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:51.376706 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:51 GMT
	I1006 02:34:51.376712 2333562 round_trippers.go:580]     Audit-Id: 8b14fb6a-1599-4b39-a7c1-3e77dc892f78
	I1006 02:34:51.376719 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:51.376730 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:51.376835 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:51.874937 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:51.874959 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:51.874968 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:51.874975 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:51.877400 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:51.877425 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:51.877433 2333562 round_trippers.go:580]     Audit-Id: 2b300c3f-7af0-4f5e-8f0a-e10d77968df4
	I1006 02:34:51.877440 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:51.877446 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:51.877453 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:51.877459 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:51.877466 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:51 GMT
	I1006 02:34:51.877606 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:52.374796 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:52.374824 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:52.374834 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:52.374841 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:52.377493 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:52.377513 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:52.377521 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:52.377528 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:52.377534 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:52.377541 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:52.377547 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:52 GMT
	I1006 02:34:52.377554 2333562 round_trippers.go:580]     Audit-Id: c0f15d07-5bad-488d-8b26-f0ec3107b878
	I1006 02:34:52.377676 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:52.874821 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:52.874844 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:52.874855 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:52.874862 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:52.877260 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:52.877280 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:52.877288 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:52.877294 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:52 GMT
	I1006 02:34:52.877301 2333562 round_trippers.go:580]     Audit-Id: ea326878-4d2b-43a2-a09c-7c92461cea3b
	I1006 02:34:52.877307 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:52.877318 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:52.877327 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:52.877492 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:53.374674 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:53.374702 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:53.374717 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:53.374725 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:53.377259 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:53.377282 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:53.377290 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:53.377297 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:53.377303 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:53.377310 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:53 GMT
	I1006 02:34:53.377316 2333562 round_trippers.go:580]     Audit-Id: 777124a4-fdb2-416c-8b09-a8815a15ae39
	I1006 02:34:53.377322 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:53.377446 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:53.377836 2333562 node_ready.go:58] node "multinode-951739" has status "Ready":"False"
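Each response's Audit-Id is unique per request and matches the auditID field of the corresponding apiserver audit event, which makes it the natural join key when audit logging is enabled. A sketch under that assumption (the helper name and the JSON-lines audit log path passed to it are hypothetical):

package nodewait

import (
	"bufio"
	"os"
	"strings"
)

// findAuditEntry scans an apiserver audit log for the event whose
// auditID matches a response's Audit-Id header, as logged above.
func findAuditEntry(path, auditID string) (string, bool) {
	f, err := os.Open(path)
	if err != nil {
		return "", false
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // audit events can exceed the 64KB default
	for sc.Scan() {
		if strings.Contains(sc.Text(), `"auditID":"`+auditID+`"`) {
			return sc.Text(), true
		}
	}
	return "", false
}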
	I1006 02:34:53.874305 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:53.874329 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:53.874338 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:53.874345 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:53.876763 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:53.876784 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:53.876792 2333562 round_trippers.go:580]     Audit-Id: 75e3236b-1abf-41a8-9f0d-a96ba11fafdb
	I1006 02:34:53.876799 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:53.876805 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:53.876812 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:53.876818 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:53.876824 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:53 GMT
	I1006 02:34:53.876976 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:54.374949 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:54.374984 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:54.374995 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:54.375024 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:54.377606 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:54.377633 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:54.377643 2333562 round_trippers.go:580]     Audit-Id: 9af6787a-6134-4d8a-abd2-adaf556cf9eb
	I1006 02:34:54.377650 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:54.377656 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:54.377663 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:54.377672 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:54.377688 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:54 GMT
	I1006 02:34:54.378098 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:54.874763 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:54.874790 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:54.874801 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:54.874809 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:54.877376 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:54.877401 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:54.877410 2333562 round_trippers.go:580]     Audit-Id: 62ed4a3f-9a44-43aa-94e1-bf1049fb2e77
	I1006 02:34:54.877417 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:54.877425 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:54.877431 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:54.877438 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:54.877444 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:54 GMT
	I1006 02:34:54.877707 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:55.374744 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:55.374770 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:55.374781 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:55.374789 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:55.377243 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:55.377263 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:55.377271 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:55.377278 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:55 GMT
	I1006 02:34:55.377284 2333562 round_trippers.go:580]     Audit-Id: ebd9a8b7-b836-4ac7-83fd-5d6768afabbe
	I1006 02:34:55.377290 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:55.377296 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:55.377303 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:55.377473 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:55.377930 2333562 node_ready.go:58] node "multinode-951739" has status "Ready":"False"
	I1006 02:34:55.874223 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:55.874262 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:55.874271 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:55.874278 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:55.876777 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:55.876799 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:55.876807 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:55 GMT
	I1006 02:34:55.876813 2333562 round_trippers.go:580]     Audit-Id: 6c034c99-84b2-4f3a-ba1d-f5069a27b14e
	I1006 02:34:55.876820 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:55.876829 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:55.876839 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:55.876850 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:55.877191 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:56.374901 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:56.374928 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:56.374938 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:56.374945 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:56.377672 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:56.377691 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:56.377700 2333562 round_trippers.go:580]     Audit-Id: 26cd3e09-ccc6-400f-aeae-99b6915e1a08
	I1006 02:34:56.377706 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:56.377713 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:56.377719 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:56.377725 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:56.377733 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:56 GMT
	I1006 02:34:56.377839 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:56.874110 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:56.874138 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:56.874148 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:56.874156 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:56.877601 2333562 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1006 02:34:56.877627 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:56.877636 2333562 round_trippers.go:580]     Audit-Id: 5e846fad-12af-4c99-99db-7aa7d5b1bf54
	I1006 02:34:56.877643 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:56.877649 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:56.877656 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:56.877663 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:56.877669 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:56 GMT
	I1006 02:34:56.877814 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:57.374943 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:57.374965 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:57.374974 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:57.374981 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:57.377500 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:57.377523 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:57.377531 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:57.377542 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:57 GMT
	I1006 02:34:57.377548 2333562 round_trippers.go:580]     Audit-Id: 29f91a4e-7bb2-490c-993e-1952b860084a
	I1006 02:34:57.377560 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:57.377571 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:57.377581 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:57.377980 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:57.378382 2333562 node_ready.go:58] node "multinode-951739" has status "Ready":"False"
	I1006 02:34:57.874077 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:57.874099 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:57.874110 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:57.874118 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:57.876802 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:57.876866 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:57.876913 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:57.876947 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:57.876973 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:57.876995 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:57 GMT
	I1006 02:34:57.877016 2333562 round_trippers.go:580]     Audit-Id: 8ec9de05-4ea6-4f48-9202-eb859d1c68ee
	I1006 02:34:57.877045 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:57.877190 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:58.374628 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:58.374651 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:58.374661 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:58.374669 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:58.377252 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:58.377276 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:58.377284 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:58 GMT
	I1006 02:34:58.377291 2333562 round_trippers.go:580]     Audit-Id: e24839a5-0b41-4d3e-89eb-e1f5930b2c9f
	I1006 02:34:58.377297 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:58.377303 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:58.377309 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:58.377316 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:58.377418 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:58.874588 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:58.874613 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:58.874623 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:58.874630 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:58.877154 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:58.877181 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:58.877190 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:58.877197 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:58.877203 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:58.877219 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:58 GMT
	I1006 02:34:58.877226 2333562 round_trippers.go:580]     Audit-Id: 138c7e68-72a0-4ba9-a8b4-ddbb27bd7282
	I1006 02:34:58.877233 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:58.877376 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:59.374385 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:59.374407 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:59.374417 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:59.374425 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:59.376949 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:59.376973 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:59.376981 2333562 round_trippers.go:580]     Audit-Id: 7f36f687-1808-4a2b-9efb-e8b079943e41
	I1006 02:34:59.376988 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:59.376994 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:59.377000 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:59.377006 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:59.377013 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:59 GMT
	I1006 02:34:59.377409 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:59.874045 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:34:59.874073 2333562 round_trippers.go:469] Request Headers:
	I1006 02:34:59.874084 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:34:59.874091 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:34:59.876755 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:34:59.876781 2333562 round_trippers.go:577] Response Headers:
	I1006 02:34:59.876790 2333562 round_trippers.go:580]     Audit-Id: 6aec69fa-5d47-4f57-9932-5ad3339657c7
	I1006 02:34:59.876797 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:34:59.876803 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:34:59.876809 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:34:59.876816 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:34:59.876823 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:34:59 GMT
	I1006 02:34:59.876978 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:34:59.877405 2333562 node_ready.go:58] node "multinode-951739" has status "Ready":"False"
	I1006 02:35:00.374260 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:35:00.374287 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:00.374298 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:00.374306 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:00.377514 2333562 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1006 02:35:00.377538 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:00.377547 2333562 round_trippers.go:580]     Audit-Id: b48f33de-8ef5-49de-970e-a1b05daa1630
	I1006 02:35:00.377553 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:00.377560 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:00.377568 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:00.377574 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:00.377580 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:00 GMT
	I1006 02:35:00.377799 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:35:00.874273 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:35:00.874300 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:00.874314 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:00.874321 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:00.876801 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:00.876822 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:00.876836 2333562 round_trippers.go:580]     Audit-Id: 9c04b3f3-7e3b-4ce2-b41d-d7a1bc6787ec
	I1006 02:35:00.876843 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:00.876849 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:00.876857 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:00.876864 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:00.876870 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:00 GMT
	I1006 02:35:00.877028 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:35:01.374429 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:35:01.374456 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:01.374467 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:01.374475 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:01.377317 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:01.377341 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:01.377349 2333562 round_trippers.go:580]     Audit-Id: 11df8dbf-95f3-4fbe-a86d-f9456da62a54
	I1006 02:35:01.377355 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:01.377362 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:01.377368 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:01.377375 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:01.377381 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:01 GMT
	I1006 02:35:01.377500 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:35:01.874659 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:35:01.874685 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:01.874695 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:01.874703 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:01.877405 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:01.877432 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:01.877441 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:01.877447 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:01.877454 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:01.877460 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:01.877467 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:01 GMT
	I1006 02:35:01.877473 2333562 round_trippers.go:580]     Audit-Id: 36f854de-7eff-4777-94a9-ea3ae0bfc43b
	I1006 02:35:01.877596 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:35:01.877999 2333562 node_ready.go:58] node "multinode-951739" has status "Ready":"False"
	I1006 02:35:02.374680 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:35:02.374708 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:02.374718 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:02.374726 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:02.377333 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:02.377359 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:02.377368 2333562 round_trippers.go:580]     Audit-Id: d6f360fc-8175-4358-9788-09c2acc5e039
	I1006 02:35:02.377374 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:02.377381 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:02.377387 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:02.377394 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:02.377400 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:02 GMT
	I1006 02:35:02.377505 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:35:02.874628 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:35:02.874654 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:02.874663 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:02.874671 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:02.877298 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:02.877323 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:02.877331 2333562 round_trippers.go:580]     Audit-Id: 1eb70cdf-6b2e-449b-b5c3-623a4b5c998b
	I1006 02:35:02.877337 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:02.877344 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:02.877350 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:02.877356 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:02.877363 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:02 GMT
	I1006 02:35:02.877473 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:35:03.374592 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:35:03.374618 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:03.374628 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:03.374641 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:03.377156 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:03.377184 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:03.377192 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:03.377199 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:03.377205 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:03.377212 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:03 GMT
	I1006 02:35:03.377219 2333562 round_trippers.go:580]     Audit-Id: fb990d50-8aa5-4248-8ed2-bdc028c8b78b
	I1006 02:35:03.377231 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:03.377338 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:35:03.873981 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:35:03.874010 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:03.874021 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:03.874028 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:03.876717 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:03.876749 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:03.876757 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:03.876764 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:03.876770 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:03.876776 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:03 GMT
	I1006 02:35:03.876783 2333562 round_trippers.go:580]     Audit-Id: 5cd9c59e-cc87-4d62-a540-1501072d2f98
	I1006 02:35:03.876789 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:03.876956 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:35:04.373999 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:35:04.374026 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:04.374036 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:04.374044 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:04.376650 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:04.376669 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:04.376677 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:04.376685 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:04 GMT
	I1006 02:35:04.376691 2333562 round_trippers.go:580]     Audit-Id: dd4c61be-1e2f-4f99-8e21-de01883d8ac1
	I1006 02:35:04.376697 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:04.376703 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:04.376709 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:04.376816 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:35:04.377209 2333562 node_ready.go:58] node "multinode-951739" has status "Ready":"False"
	I1006 02:35:04.874889 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:35:04.874919 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:04.874928 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:04.874935 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:04.877393 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:04.877416 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:04.877424 2333562 round_trippers.go:580]     Audit-Id: b8245d5b-90bb-4ba3-81ae-1bbc06a98a38
	I1006 02:35:04.877431 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:04.877437 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:04.877443 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:04.877449 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:04.877455 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:04 GMT
	I1006 02:35:04.877825 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:35:05.373993 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:35:05.374019 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:05.374029 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:05.374037 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:05.376562 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:05.376587 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:05.376595 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:05 GMT
	I1006 02:35:05.376604 2333562 round_trippers.go:580]     Audit-Id: dd6b2a33-2d23-47f8-816a-f20a554f59a0
	I1006 02:35:05.376612 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:05.376618 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:05.376625 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:05.376634 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:05.376988 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:35:05.874692 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:35:05.874725 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:05.874735 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:05.874742 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:05.877344 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:05.877363 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:05.877372 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:05.877380 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:05.877386 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:05.877392 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:05.877399 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:05 GMT
	I1006 02:35:05.877405 2333562 round_trippers.go:580]     Audit-Id: 94002944-e312-4987-b1fa-117367e65814
	I1006 02:35:05.877575 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:35:06.374684 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:35:06.374709 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:06.374719 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:06.374727 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:06.377550 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:06.377578 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:06.377588 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:06.377594 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:06.377601 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:06.377607 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:06.377617 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:06 GMT
	I1006 02:35:06.377628 2333562 round_trippers.go:580]     Audit-Id: 77d43f9b-2cd1-4279-a819-6d775762da42
	I1006 02:35:06.377732 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:35:06.378135 2333562 node_ready.go:58] node "multinode-951739" has status "Ready":"False"
	I1006 02:35:06.873996 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:35:06.874021 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:06.874030 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:06.874037 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:06.876695 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:06.876756 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:06.876772 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:06.876779 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:06.876786 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:06.876792 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:06 GMT
	I1006 02:35:06.876799 2333562 round_trippers.go:580]     Audit-Id: 15ae2205-77df-44b1-a18b-7e544e745a7c
	I1006 02:35:06.876805 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:06.876926 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:35:07.374314 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:35:07.374340 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:07.374350 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:07.374358 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:07.378126 2333562 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1006 02:35:07.378153 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:07.378162 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:07 GMT
	I1006 02:35:07.378169 2333562 round_trippers.go:580]     Audit-Id: ddf49a5c-1b72-462c-b419-177ef3533bda
	I1006 02:35:07.378176 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:07.378182 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:07.378189 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:07.378197 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:07.378339 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:35:07.874002 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:35:07.874029 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:07.874038 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:07.874045 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:07.876583 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:07.876603 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:07.876611 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:07.876620 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:07.876627 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:07.876633 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:07.876639 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:07 GMT
	I1006 02:35:07.876650 2333562 round_trippers.go:580]     Audit-Id: cf7950a9-9649-44a1-a2b3-5e67f1460c41
	I1006 02:35:07.876758 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:35:08.374023 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:35:08.374046 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:08.374055 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:08.374062 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:08.376637 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:08.376660 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:08.376668 2333562 round_trippers.go:580]     Audit-Id: 6eb4cfcc-2a2c-4b51-bddc-9d823a9a31dc
	I1006 02:35:08.376676 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:08.376682 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:08.376689 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:08.376695 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:08.376703 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:08 GMT
	I1006 02:35:08.377187 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"345","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1006 02:35:08.873992 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:35:08.874013 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:08.874022 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:08.874029 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:08.879870 2333562 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1006 02:35:08.879890 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:08.879898 2333562 round_trippers.go:580]     Audit-Id: e5486458-dfc6-4a3a-b410-a288c99d3799
	I1006 02:35:08.879904 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:08.879911 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:08.879917 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:08.879923 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:08.879929 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:08 GMT
	I1006 02:35:08.880069 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"429","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1006 02:35:08.880462 2333562 node_ready.go:49] node "multinode-951739" has status "Ready":"True"
	I1006 02:35:08.880475 2333562 node_ready.go:38] duration metric: took 31.524023329s waiting for node "multinode-951739" to be "Ready" ...
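
The loop above is the node_ready poll: roughly every 500ms minikube GETs /api/v1/nodes/multinode-951739 and re-reads the node's conditions until Ready reports "True", which took ~31.5s in this run. A minimal client-go sketch of the same pattern; waitNodeReady and the kubeconfig path are illustrative, not minikube's actual helper:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady mirrors the poll in the log: GET the node every 500ms and
// return once its Ready condition reports True.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(context.Background(), cs, "multinode-951739"); err != nil {
		panic(err)
	}
	fmt.Println("node Ready")
}
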
	I1006 02:35:08.880486 2333562 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1006 02:35:08.880578 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1006 02:35:08.880584 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:08.880592 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:08.880598 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:08.888905 2333562 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1006 02:35:08.888926 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:08.888935 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:08.888941 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:08 GMT
	I1006 02:35:08.888948 2333562 round_trippers.go:580]     Audit-Id: ea85c6a1-5a4c-4a7c-8d4b-9a573a2e5014
	I1006 02:35:08.888954 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:08.888960 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:08.888966 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:08.889407 2333562 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"435"},"items":[{"metadata":{"name":"coredns-5dd5756b68-tswm4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"82def565-623c-4885-b2a3-87c5302c1841","resourceVersion":"434","creationTimestamp":"2023-10-06T02:34:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"61adee40-8c8c-4b55-82f7-ad79fd22292d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"61adee40-8c8c-4b55-82f7-ad79fd22292d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55534 chars]
	I1006 02:35:08.893431 2333562 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-tswm4" in "kube-system" namespace to be "Ready" ...
	I1006 02:35:08.893582 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-tswm4
	I1006 02:35:08.893607 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:08.893629 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:08.893651 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:08.897930 2333562 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1006 02:35:08.897988 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:08.898009 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:08.898032 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:08 GMT
	I1006 02:35:08.898067 2333562 round_trippers.go:580]     Audit-Id: 3827a745-ddfc-4bc0-92d1-9588733cdf6c
	I1006 02:35:08.898091 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:08.898111 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:08.898133 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:08.898255 2333562 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-tswm4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"82def565-623c-4885-b2a3-87c5302c1841","resourceVersion":"434","creationTimestamp":"2023-10-06T02:34:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"61adee40-8c8c-4b55-82f7-ad79fd22292d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"61adee40-8c8c-4b55-82f7-ad79fd22292d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1006 02:35:08.898807 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:35:08.898843 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:08.898865 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:08.898888 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:08.901526 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:08.901581 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:08.901602 2333562 round_trippers.go:580]     Audit-Id: 406c89ee-2e5d-4088-8bdb-d0f33eeec391
	I1006 02:35:08.901624 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:08.901655 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:08.901679 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:08.901698 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:08.901720 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:08 GMT
	I1006 02:35:08.901853 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"429","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1006 02:35:08.902315 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-tswm4
	I1006 02:35:08.902346 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:08.902367 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:08.902389 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:08.905102 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:08.905149 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:08.905170 2333562 round_trippers.go:580]     Audit-Id: b4bb408f-0ea6-4e18-acec-f9068111a13c
	I1006 02:35:08.905194 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:08.905228 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:08.905249 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:08.905274 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:08.905286 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:08 GMT
	I1006 02:35:08.905437 2333562 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-tswm4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"82def565-623c-4885-b2a3-87c5302c1841","resourceVersion":"434","creationTimestamp":"2023-10-06T02:34:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"61adee40-8c8c-4b55-82f7-ad79fd22292d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"61adee40-8c8c-4b55-82f7-ad79fd22292d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1006 02:35:08.905980 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:35:08.905998 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:08.906007 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:08.906014 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:08.908444 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:08.908464 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:08.908473 2333562 round_trippers.go:580]     Audit-Id: 48aae63f-2508-4f84-a4b2-9e26ad44650b
	I1006 02:35:08.908479 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:08.908486 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:08.908492 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:08.908498 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:08.908505 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:08 GMT
	I1006 02:35:08.908828 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"429","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1006 02:35:09.409989 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-tswm4
	I1006 02:35:09.410022 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:09.410036 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:09.410044 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:09.412769 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:09.412840 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:09.412911 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:09.412939 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:09.412959 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:09.412981 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:09 GMT
	I1006 02:35:09.413014 2333562 round_trippers.go:580]     Audit-Id: 55581b7e-74be-410c-a576-9feaa4fa4ecc
	I1006 02:35:09.413066 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:09.413190 2333562 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-tswm4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"82def565-623c-4885-b2a3-87c5302c1841","resourceVersion":"434","creationTimestamp":"2023-10-06T02:34:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"61adee40-8c8c-4b55-82f7-ad79fd22292d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"61adee40-8c8c-4b55-82f7-ad79fd22292d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1006 02:35:09.413822 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:35:09.413839 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:09.413848 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:09.413855 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:09.416353 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:09.416376 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:09.416384 2333562 round_trippers.go:580]     Audit-Id: c6d3e216-b08a-429e-b336-0d6072a665f1
	I1006 02:35:09.416390 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:09.416396 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:09.416402 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:09.416409 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:09.416416 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:09 GMT
	I1006 02:35:09.416532 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"429","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1006 02:35:09.909639 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-tswm4
	I1006 02:35:09.909667 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:09.909677 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:09.909684 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:09.912332 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:09.912376 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:09.912384 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:09.912392 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:09 GMT
	I1006 02:35:09.912398 2333562 round_trippers.go:580]     Audit-Id: 706c9d0a-9ce1-4bbf-bdbb-5b752c04defb
	I1006 02:35:09.912405 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:09.912418 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:09.912428 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:09.912543 2333562 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-tswm4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"82def565-623c-4885-b2a3-87c5302c1841","resourceVersion":"445","creationTimestamp":"2023-10-06T02:34:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"61adee40-8c8c-4b55-82f7-ad79fd22292d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"61adee40-8c8c-4b55-82f7-ad79fd22292d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1006 02:35:09.913096 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:35:09.913111 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:09.913120 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:09.913127 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:09.915537 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:09.915561 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:09.915569 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:09.915576 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:09.915582 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:09.915589 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:09.915595 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:09 GMT
	I1006 02:35:09.915602 2333562 round_trippers.go:580]     Audit-Id: fbd3a50c-2f10-4392-a512-6942988e6bf3
	I1006 02:35:09.915721 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"429","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1006 02:35:09.916117 2333562 pod_ready.go:92] pod "coredns-5dd5756b68-tswm4" in "kube-system" namespace has status "Ready":"True"
	I1006 02:35:09.916148 2333562 pod_ready.go:81] duration metric: took 1.022651911s waiting for pod "coredns-5dd5756b68-tswm4" in "kube-system" namespace to be "Ready" ...
	I1006 02:35:09.916161 2333562 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-951739" in "kube-system" namespace to be "Ready" ...
	I1006 02:35:09.916224 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-951739
	I1006 02:35:09.916236 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:09.916244 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:09.916251 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:09.918609 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:09.918640 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:09.918648 2333562 round_trippers.go:580]     Audit-Id: e09b1542-54cd-4bef-acc4-b4905fb443b7
	I1006 02:35:09.918654 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:09.918660 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:09.918667 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:09.918675 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:09.918685 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:09 GMT
	I1006 02:35:09.918782 2333562 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-951739","namespace":"kube-system","uid":"bef22c05-be2f-4ea4-822d-2eba636c713e","resourceVersion":"418","creationTimestamp":"2023-10-06T02:34:24Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"af5e192498cd67a5eafe7312fdcb281d","kubernetes.io/config.mirror":"af5e192498cd67a5eafe7312fdcb281d","kubernetes.io/config.seen":"2023-10-06T02:34:24.422905048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1006 02:35:09.919265 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:35:09.919280 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:09.919288 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:09.919298 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:09.921666 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:09.921724 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:09.921748 2333562 round_trippers.go:580]     Audit-Id: fcac439f-04af-457b-a091-6f1b7c907ce9
	I1006 02:35:09.921771 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:09.921805 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:09.921835 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:09.921858 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:09.921879 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:09 GMT
	I1006 02:35:09.922053 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"429","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1006 02:35:09.922466 2333562 pod_ready.go:92] pod "etcd-multinode-951739" in "kube-system" namespace has status "Ready":"True"
	I1006 02:35:09.922485 2333562 pod_ready.go:81] duration metric: took 6.310994ms waiting for pod "etcd-multinode-951739" in "kube-system" namespace to be "Ready" ...
	I1006 02:35:09.922499 2333562 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-951739" in "kube-system" namespace to be "Ready" ...
	I1006 02:35:09.922554 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-951739
	I1006 02:35:09.922565 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:09.922579 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:09.922587 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:09.925635 2333562 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1006 02:35:09.925659 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:09.925668 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:09 GMT
	I1006 02:35:09.925674 2333562 round_trippers.go:580]     Audit-Id: 048746e2-3ae5-43da-8525-0f9ddbe70691
	I1006 02:35:09.925681 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:09.925687 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:09.925693 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:09.925700 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:09.926044 2333562 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-951739","namespace":"kube-system","uid":"7129e4a8-1667-4441-b00d-5e0f59264803","resourceVersion":"357","creationTimestamp":"2023-10-06T02:34:24Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"3f4141b08fc9414f47ccfd58153cd186","kubernetes.io/config.mirror":"3f4141b08fc9414f47ccfd58153cd186","kubernetes.io/config.seen":"2023-10-06T02:34:24.422911111Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1006 02:35:09.926645 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:35:09.926660 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:09.926669 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:09.926676 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:09.929626 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:09.929655 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:09.929664 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:09.929671 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:09.929677 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:09.929684 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:09 GMT
	I1006 02:35:09.929691 2333562 round_trippers.go:580]     Audit-Id: 9d20a8d9-ad74-4a54-a246-e801702f9a8b
	I1006 02:35:09.929697 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:09.930092 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"429","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1006 02:35:09.930504 2333562 pod_ready.go:92] pod "kube-apiserver-multinode-951739" in "kube-system" namespace has status "Ready":"True"
	I1006 02:35:09.930522 2333562 pod_ready.go:81] duration metric: took 8.015187ms waiting for pod "kube-apiserver-multinode-951739" in "kube-system" namespace to be "Ready" ...
	I1006 02:35:09.930535 2333562 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-951739" in "kube-system" namespace to be "Ready" ...
	I1006 02:35:09.930598 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-951739
	I1006 02:35:09.930607 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:09.930615 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:09.930621 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:09.933303 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:09.933328 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:09.933337 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:09.933344 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:09.933351 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:09 GMT
	I1006 02:35:09.933357 2333562 round_trippers.go:580]     Audit-Id: 86ada856-d634-4d40-8a2e-cbebc30efa67
	I1006 02:35:09.933365 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:09.933374 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:09.933571 2333562 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-951739","namespace":"kube-system","uid":"8309b551-13a7-4115-a9a7-8e1f482fbdf4","resourceVersion":"419","creationTimestamp":"2023-10-06T02:34:24Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"39065bcc46cf04349a0afe0158652bd4","kubernetes.io/config.mirror":"39065bcc46cf04349a0afe0158652bd4","kubernetes.io/config.seen":"2023-10-06T02:34:24.422912465Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1006 02:35:10.074583 2333562 request.go:629] Waited for 140.458415ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:35:10.074753 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:35:10.074779 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:10.074803 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:10.074830 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:10.077570 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:10.077592 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:10.077602 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:10.077609 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:10.077644 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:10.077659 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:10.077667 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:10 GMT
	I1006 02:35:10.077673 2333562 round_trippers.go:580]     Audit-Id: bf6e2acd-5ca5-47e6-8fea-241cfc22885f
	I1006 02:35:10.077796 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"429","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1006 02:35:10.078206 2333562 pod_ready.go:92] pod "kube-controller-manager-multinode-951739" in "kube-system" namespace has status "Ready":"True"
	I1006 02:35:10.078222 2333562 pod_ready.go:81] duration metric: took 147.68005ms waiting for pod "kube-controller-manager-multinode-951739" in "kube-system" namespace to be "Ready" ...
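
The "Waited for ... due to client-side throttling, not priority and fairness" lines here and below come from client-go's own rate limiter, not from the API server: by default a rest.Config allows 5 requests/s with a burst of 10, so this tight polling briefly queues requests on the client before they are ever sent. A sketch of widening those limits, with illustrative values and kubeconfig path:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	// client-go throttles on the client side; the defaults (QPS=5, Burst=10)
	// produce the "client-side throttling" waits seen in the log. Both knobs
	// are plain fields on the config:
	cfg.QPS = 50
	cfg.Burst = 100
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}
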
	I1006 02:35:10.078235 2333562 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lwrtj" in "kube-system" namespace to be "Ready" ...
	I1006 02:35:10.274590 2333562 request.go:629] Waited for 196.287386ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lwrtj
	I1006 02:35:10.274669 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lwrtj
	I1006 02:35:10.274680 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:10.274689 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:10.274700 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:10.277391 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:10.277463 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:10.277501 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:10 GMT
	I1006 02:35:10.277529 2333562 round_trippers.go:580]     Audit-Id: 72d9ae51-acbf-4c6d-99a3-64e56a7971ce
	I1006 02:35:10.277552 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:10.277580 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:10.277589 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:10.277606 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:10.277766 2333562 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lwrtj","generateName":"kube-proxy-","namespace":"kube-system","uid":"a24c85d4-5722-49cd-bfd9-adc611cca199","resourceVersion":"414","creationTimestamp":"2023-10-06T02:34:36Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4ad46367-bdfb-4340-af54-8507ab3db445","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ad46367-bdfb-4340-af54-8507ab3db445\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1006 02:35:10.474562 2333562 request.go:629] Waited for 196.312608ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:35:10.474617 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:35:10.474627 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:10.474636 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:10.474648 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:10.477237 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:10.477265 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:10.477274 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:10.477280 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:10.477287 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:10.477301 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:10.477313 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:10 GMT
	I1006 02:35:10.477320 2333562 round_trippers.go:580]     Audit-Id: c4de396c-7c46-45cc-90f2-6a6f01a8f10c
	I1006 02:35:10.477696 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"429","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1006 02:35:10.478125 2333562 pod_ready.go:92] pod "kube-proxy-lwrtj" in "kube-system" namespace has status "Ready":"True"
	I1006 02:35:10.478143 2333562 pod_ready.go:81] duration metric: took 399.901251ms waiting for pod "kube-proxy-lwrtj" in "kube-system" namespace to be "Ready" ...
	I1006 02:35:10.478154 2333562 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-951739" in "kube-system" namespace to be "Ready" ...
	I1006 02:35:10.674392 2333562 request.go:629] Waited for 196.174927ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-951739
	I1006 02:35:10.674465 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-951739
	I1006 02:35:10.674475 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:10.674507 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:10.674518 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:10.676996 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:10.677020 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:10.677028 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:10.677035 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:10 GMT
	I1006 02:35:10.677041 2333562 round_trippers.go:580]     Audit-Id: 424918ed-e1e5-407f-a555-296d68a68bfe
	I1006 02:35:10.677047 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:10.677053 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:10.677060 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:10.677199 2333562 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-951739","namespace":"kube-system","uid":"72163d36-23a0-4b32-b6bb-8c79dc9145b6","resourceVersion":"347","creationTimestamp":"2023-10-06T02:34:24Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c53f7d1bf40c93d218c2763d6e42215d","kubernetes.io/config.mirror":"c53f7d1bf40c93d218c2763d6e42215d","kubernetes.io/config.seen":"2023-10-06T02:34:24.422913417Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1006 02:35:10.874965 2333562 request.go:629] Waited for 197.344076ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:35:10.875068 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:35:10.875078 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:10.875087 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:10.875098 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:10.877910 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:10.877934 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:10.877944 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:10.877952 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:10.877975 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:10.877988 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:10.877995 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:10 GMT
	I1006 02:35:10.878004 2333562 round_trippers.go:580]     Audit-Id: 3ad9e892-7c95-45d3-bdc8-fdfcf9919e89
	I1006 02:35:10.878122 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"429","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1006 02:35:10.878564 2333562 pod_ready.go:92] pod "kube-scheduler-multinode-951739" in "kube-system" namespace has status "Ready":"True"
	I1006 02:35:10.878582 2333562 pod_ready.go:81] duration metric: took 400.419359ms waiting for pod "kube-scheduler-multinode-951739" in "kube-system" namespace to be "Ready" ...
	I1006 02:35:10.878595 2333562 pod_ready.go:38] duration metric: took 1.998085783s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
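The repeated "Waited ... due to client-side throttling, not priority and fairness" lines above come from client-go's token-bucket rate limiter pacing requests on the client side: at the default QPS of 5, consecutive calls are spaced roughly 200ms apart, which matches the ~196-200ms waits logged here. A minimal Go sketch of relaxing that limit on a rest.Config (the kubeconfig path is a hypothetical stand-in, not taken from this log):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig path; minikube keeps its own under the profile dir.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        // client-go defaults to QPS=5, Burst=10; raising them shortens or
        // removes the client-side waits seen in this log.
        cfg.QPS = 50
        cfg.Burst = 100
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Printf("client ready: %T\n", cs)
    }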
	I1006 02:35:10.878614 2333562 api_server.go:52] waiting for apiserver process to appear ...
	I1006 02:35:10.878688 2333562 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 02:35:10.891938 2333562 command_runner.go:130] > 1247
	I1006 02:35:10.893481 2333562 api_server.go:72] duration metric: took 34.195558435s to wait for apiserver process to appear ...
	I1006 02:35:10.893506 2333562 api_server.go:88] waiting for apiserver healthz status ...
	I1006 02:35:10.893524 2333562 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1006 02:35:10.902324 2333562 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
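The healthz probe is a plain HTTPS GET that succeeds once the apiserver answers 200 with the literal body "ok", as it does above. A self-contained sketch of the same check (InsecureSkipVerify is a shortcut standing in for loading the cluster CA, which the real code does):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.58.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
    }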
	I1006 02:35:10.902393 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I1006 02:35:10.902405 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:10.902414 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:10.902421 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:10.903777 2333562 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1006 02:35:10.903833 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:10.903855 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:10 GMT
	I1006 02:35:10.903878 2333562 round_trippers.go:580]     Audit-Id: 8a4a1c7f-b971-4cec-b47d-a0264550a98c
	I1006 02:35:10.903915 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:10.903940 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:10.903963 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:10.903998 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:10.904043 2333562 round_trippers.go:580]     Content-Length: 263
	I1006 02:35:10.904093 2333562 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.2",
	  "gitCommit": "89a4ea3e1e4ddd7f7572286090359983e0387b2f",
	  "gitTreeState": "clean",
	  "buildDate": "2023-09-13T09:29:07Z",
	  "goVersion": "go1.20.8",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I1006 02:35:10.904210 2333562 api_server.go:141] control plane version: v1.28.2
	I1006 02:35:10.904230 2333562 api_server.go:131] duration metric: took 10.716592ms to wait for apiserver health ...
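The control-plane version is read straight out of the /version JSON shown above; a short sketch of that decode step (the struct mirrors the response fields, matching the shape of k8s.io/apimachinery's version.Info):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type versionInfo struct {
        Major      string `json:"major"`
        Minor      string `json:"minor"`
        GitVersion string `json:"gitVersion"`
        Platform   string `json:"platform"`
    }

    func main() {
        raw := []byte(`{"major":"1","minor":"28","gitVersion":"v1.28.2","platform":"linux/arm64"}`)
        var v versionInfo
        if err := json.Unmarshal(raw, &v); err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", v.GitVersion) // -> v1.28.2
    }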
	I1006 02:35:10.904241 2333562 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 02:35:11.074629 2333562 request.go:629] Waited for 170.316389ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1006 02:35:11.074741 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1006 02:35:11.074784 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:11.074802 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:11.074810 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:11.078540 2333562 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1006 02:35:11.078566 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:11.078575 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:11 GMT
	I1006 02:35:11.078582 2333562 round_trippers.go:580]     Audit-Id: 14175ffe-a746-4a7b-82a8-3711918f2690
	I1006 02:35:11.078588 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:11.078594 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:11.078601 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:11.078615 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:11.078973 2333562 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"coredns-5dd5756b68-tswm4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"82def565-623c-4885-b2a3-87c5302c1841","resourceVersion":"445","creationTimestamp":"2023-10-06T02:34:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"61adee40-8c8c-4b55-82f7-ad79fd22292d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"61adee40-8c8c-4b55-82f7-ad79fd22292d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1006 02:35:11.081363 2333562 system_pods.go:59] 8 kube-system pods found
	I1006 02:35:11.081398 2333562 system_pods.go:61] "coredns-5dd5756b68-tswm4" [82def565-623c-4885-b2a3-87c5302c1841] Running
	I1006 02:35:11.081404 2333562 system_pods.go:61] "etcd-multinode-951739" [bef22c05-be2f-4ea4-822d-2eba636c713e] Running
	I1006 02:35:11.081409 2333562 system_pods.go:61] "kindnet-6r6sg" [7db99007-1339-4908-9602-17cc612ce27b] Running
	I1006 02:35:11.081416 2333562 system_pods.go:61] "kube-apiserver-multinode-951739" [7129e4a8-1667-4441-b00d-5e0f59264803] Running
	I1006 02:35:11.081423 2333562 system_pods.go:61] "kube-controller-manager-multinode-951739" [8309b551-13a7-4115-a9a7-8e1f482fbdf4] Running
	I1006 02:35:11.081427 2333562 system_pods.go:61] "kube-proxy-lwrtj" [a24c85d4-5722-49cd-bfd9-adc611cca199] Running
	I1006 02:35:11.081433 2333562 system_pods.go:61] "kube-scheduler-multinode-951739" [72163d36-23a0-4b32-b6bb-8c79dc9145b6] Running
	I1006 02:35:11.081438 2333562 system_pods.go:61] "storage-provisioner" [473466d1-f407-4b35-b662-880c7ee0439a] Running
	I1006 02:35:11.081444 2333562 system_pods.go:74] duration metric: took 177.192885ms to wait for pod list to return data ...
	I1006 02:35:11.081452 2333562 default_sa.go:34] waiting for default service account to be created ...
	I1006 02:35:11.274859 2333562 request.go:629] Waited for 193.305071ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1006 02:35:11.274917 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1006 02:35:11.274923 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:11.274936 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:11.274951 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:11.277427 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:11.277450 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:11.277467 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:11 GMT
	I1006 02:35:11.277474 2333562 round_trippers.go:580]     Audit-Id: 0147fc93-ea2b-496c-9dbc-c20b94447f09
	I1006 02:35:11.277480 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:11.277487 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:11.277497 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:11.277503 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:11.277513 2333562 round_trippers.go:580]     Content-Length: 261
	I1006 02:35:11.277538 2333562 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"c28c6e62-ac7c-4aef-ace4-4b93152f58c1","resourceVersion":"337","creationTimestamp":"2023-10-06T02:34:36Z"}}]}
	I1006 02:35:11.277740 2333562 default_sa.go:45] found service account: "default"
	I1006 02:35:11.277755 2333562 default_sa.go:55] duration metric: took 196.296746ms for default service account to be created ...
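The same check in a few lines of client-go: list the service accounts in the default namespace and confirm one named "default" exists (the kubeconfig path is a hypothetical stand-in):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        sas, err := cs.CoreV1().ServiceAccounts("default").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, sa := range sas.Items {
            fmt.Println("found service account:", sa.Name) // expect: default
        }
    }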
	I1006 02:35:11.277764 2333562 system_pods.go:116] waiting for k8s-apps to be running ...
	I1006 02:35:11.474076 2333562 request.go:629] Waited for 196.251142ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1006 02:35:11.474162 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1006 02:35:11.474175 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:11.474186 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:11.474195 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:11.477762 2333562 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1006 02:35:11.477838 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:11.477861 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:11.477883 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:11.477906 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:11.477928 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:11.477981 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:11 GMT
	I1006 02:35:11.478009 2333562 round_trippers.go:580]     Audit-Id: de8f7c32-7103-499f-b835-48119e5d5645
	I1006 02:35:11.478373 2333562 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-5dd5756b68-tswm4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"82def565-623c-4885-b2a3-87c5302c1841","resourceVersion":"445","creationTimestamp":"2023-10-06T02:34:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"61adee40-8c8c-4b55-82f7-ad79fd22292d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"61adee40-8c8c-4b55-82f7-ad79fd22292d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1006 02:35:11.480805 2333562 system_pods.go:86] 8 kube-system pods found
	I1006 02:35:11.480832 2333562 system_pods.go:89] "coredns-5dd5756b68-tswm4" [82def565-623c-4885-b2a3-87c5302c1841] Running
	I1006 02:35:11.480840 2333562 system_pods.go:89] "etcd-multinode-951739" [bef22c05-be2f-4ea4-822d-2eba636c713e] Running
	I1006 02:35:11.480845 2333562 system_pods.go:89] "kindnet-6r6sg" [7db99007-1339-4908-9602-17cc612ce27b] Running
	I1006 02:35:11.480851 2333562 system_pods.go:89] "kube-apiserver-multinode-951739" [7129e4a8-1667-4441-b00d-5e0f59264803] Running
	I1006 02:35:11.480856 2333562 system_pods.go:89] "kube-controller-manager-multinode-951739" [8309b551-13a7-4115-a9a7-8e1f482fbdf4] Running
	I1006 02:35:11.480861 2333562 system_pods.go:89] "kube-proxy-lwrtj" [a24c85d4-5722-49cd-bfd9-adc611cca199] Running
	I1006 02:35:11.480872 2333562 system_pods.go:89] "kube-scheduler-multinode-951739" [72163d36-23a0-4b32-b6bb-8c79dc9145b6] Running
	I1006 02:35:11.480881 2333562 system_pods.go:89] "storage-provisioner" [473466d1-f407-4b35-b662-880c7ee0439a] Running
	I1006 02:35:11.480888 2333562 system_pods.go:126] duration metric: took 203.116914ms to wait for k8s-apps to be running ...
	I1006 02:35:11.480899 2333562 system_svc.go:44] waiting for kubelet service to be running ....
	I1006 02:35:11.480959 2333562 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 02:35:11.494801 2333562 system_svc.go:56] duration metric: took 13.892609ms WaitForService to wait for kubelet.
	I1006 02:35:11.494828 2333562 kubeadm.go:581] duration metric: took 34.796910968s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1006 02:35:11.494848 2333562 node_conditions.go:102] verifying NodePressure condition ...
	I1006 02:35:11.674173 2333562 request.go:629] Waited for 179.239918ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1006 02:35:11.674227 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1006 02:35:11.674237 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:11.674246 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:11.674256 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:11.676923 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:11.676979 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:11.676988 2333562 round_trippers.go:580]     Audit-Id: c030f3ab-b3dd-4e09-8278-a0eab134094c
	I1006 02:35:11.676995 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:11.677001 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:11.677007 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:11.677013 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:11.677020 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:11 GMT
	I1006 02:35:11.677128 2333562 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"429","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6082 chars]
	I1006 02:35:11.677574 2333562 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 02:35:11.677599 2333562 node_conditions.go:123] node cpu capacity is 2
	I1006 02:35:11.677614 2333562 node_conditions.go:105] duration metric: took 182.761607ms to run NodePressure ...
	I1006 02:35:11.677631 2333562 start.go:228] waiting for startup goroutines ...
	I1006 02:35:11.677643 2333562 start.go:233] waiting for cluster config update ...
	I1006 02:35:11.677654 2333562 start.go:242] writing updated cluster config ...
	I1006 02:35:11.680037 2333562 out.go:177] 
	I1006 02:35:11.682017 2333562 config.go:182] Loaded profile config "multinode-951739": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1006 02:35:11.682111 2333562 profile.go:148] Saving config to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/config.json ...
	I1006 02:35:11.684363 2333562 out.go:177] * Starting worker node multinode-951739-m02 in cluster multinode-951739
	I1006 02:35:11.686277 2333562 cache.go:122] Beginning downloading kic base image for docker with crio
	I1006 02:35:11.688061 2333562 out.go:177] * Pulling base image ...
	I1006 02:35:11.690355 2333562 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1006 02:35:11.690385 2333562 cache.go:57] Caching tarball of preloaded images
	I1006 02:35:11.690438 2333562 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1006 02:35:11.690517 2333562 preload.go:174] Found /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1006 02:35:11.690528 2333562 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1006 02:35:11.690653 2333562 profile.go:148] Saving config to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/config.json ...
	I1006 02:35:11.707892 2333562 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
	I1006 02:35:11.707919 2333562 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
	I1006 02:35:11.707940 2333562 cache.go:195] Successfully downloaded all kic artifacts
	I1006 02:35:11.707968 2333562 start.go:365] acquiring machines lock for multinode-951739-m02: {Name:mk7dfb4da3f079a17f7f7aa3a5af81e348fb055c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 02:35:11.708131 2333562 start.go:369] acquired machines lock for "multinode-951739-m02" in 142.308µs
	I1006 02:35:11.708162 2333562 start.go:93] Provisioning new machine with config: &{Name:multinode-951739 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-951739 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1006 02:35:11.708255 2333562 start.go:125] createHost starting for "m02" (driver="docker")
	I1006 02:35:11.711720 2333562 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1006 02:35:11.711840 2333562 start.go:159] libmachine.API.Create for "multinode-951739" (driver="docker")
	I1006 02:35:11.711871 2333562 client.go:168] LocalClient.Create starting
	I1006 02:35:11.711950 2333562 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem
	I1006 02:35:11.711989 2333562 main.go:141] libmachine: Decoding PEM data...
	I1006 02:35:11.712005 2333562 main.go:141] libmachine: Parsing certificate...
	I1006 02:35:11.712064 2333562 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem
	I1006 02:35:11.712082 2333562 main.go:141] libmachine: Decoding PEM data...
	I1006 02:35:11.712092 2333562 main.go:141] libmachine: Parsing certificate...
	I1006 02:35:11.712327 2333562 cli_runner.go:164] Run: docker network inspect multinode-951739 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 02:35:11.730013 2333562 network_create.go:77] Found existing network {name:multinode-951739 subnet:0x40033f9050 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I1006 02:35:11.730055 2333562 kic.go:118] calculated static IP "192.168.58.3" for the "multinode-951739-m02" container
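The static IP is simple arithmetic over the existing network: the subnet gateway holds .1 (see the network_create line above), the primary node took .2, so the new worker gets the next host address. A sketch of that step with net/netip (names are illustrative):

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // The multinode-951739 network's gateway is 192.168.58.1; the primary
        // node took .2, so the next worker is assigned .3.
        gateway := netip.MustParseAddr("192.168.58.1")
        next := gateway
        for i := 0; i < 2; i++ {
            next = next.Next() // .2, then .3
        }
        fmt.Println("worker IP:", next) // -> 192.168.58.3
    }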
	I1006 02:35:11.730129 2333562 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 02:35:11.746928 2333562 cli_runner.go:164] Run: docker volume create multinode-951739-m02 --label name.minikube.sigs.k8s.io=multinode-951739-m02 --label created_by.minikube.sigs.k8s.io=true
	I1006 02:35:11.764963 2333562 oci.go:103] Successfully created a docker volume multinode-951739-m02
	I1006 02:35:11.765056 2333562 cli_runner.go:164] Run: docker run --rm --name multinode-951739-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-951739-m02 --entrypoint /usr/bin/test -v multinode-951739-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1006 02:35:12.394526 2333562 oci.go:107] Successfully prepared a docker volume multinode-951739-m02
	I1006 02:35:12.394578 2333562 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1006 02:35:12.394598 2333562 kic.go:191] Starting extracting preloaded images to volume ...
	I1006 02:35:12.394679 2333562 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-951739-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir
	I1006 02:35:16.597327 2333562 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-951739-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir: (4.202599273s)
	I1006 02:35:16.597362 2333562 kic.go:200] duration metric: took 4.202759 seconds to extract preloaded images to volume
	W1006 02:35:16.597554 2333562 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1006 02:35:16.597672 2333562 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 02:35:16.674235 2333562 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-951739-m02 --name multinode-951739-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-951739-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-951739-m02 --network multinode-951739 --ip 192.168.58.3 --volume multinode-951739-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae
	I1006 02:35:17.050513 2333562 cli_runner.go:164] Run: docker container inspect multinode-951739-m02 --format={{.State.Running}}
	I1006 02:35:17.072547 2333562 cli_runner.go:164] Run: docker container inspect multinode-951739-m02 --format={{.State.Status}}
	I1006 02:35:17.094161 2333562 cli_runner.go:164] Run: docker exec multinode-951739-m02 stat /var/lib/dpkg/alternatives/iptables
	I1006 02:35:17.169267 2333562 oci.go:144] the created container "multinode-951739-m02" has a running status.
	I1006 02:35:17.169293 2333562 kic.go:222] Creating ssh key for kic: /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/multinode-951739-m02/id_rsa...
	I1006 02:35:17.612490 2333562 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/multinode-951739-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1006 02:35:17.612580 2333562 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/multinode-951739-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
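Key creation here is ordinary RSA generation plus an authorized_keys-format public key. A sketch under the assumption of a 2048-bit key (the log does not state the size), using crypto/rsa and golang.org/x/crypto/ssh:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        priv, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        // PEM-encoded private key (the id_rsa file written above).
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(priv),
        })
        // authorized_keys line (the id_rsa.pub content copied into the container).
        pub, err := ssh.NewPublicKey(&priv.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("private key: %d bytes\n", len(privPEM))
        fmt.Printf("%s", ssh.MarshalAuthorizedKey(pub))
    }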
	I1006 02:35:17.643890 2333562 cli_runner.go:164] Run: docker container inspect multinode-951739-m02 --format={{.State.Status}}
	I1006 02:35:17.679271 2333562 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 02:35:17.679303 2333562 kic_runner.go:114] Args: [docker exec --privileged multinode-951739-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1006 02:35:17.781293 2333562 cli_runner.go:164] Run: docker container inspect multinode-951739-m02 --format={{.State.Status}}
	I1006 02:35:17.820829 2333562 machine.go:88] provisioning docker machine ...
	I1006 02:35:17.820859 2333562 ubuntu.go:169] provisioning hostname "multinode-951739-m02"
	I1006 02:35:17.820920 2333562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951739-m02
	I1006 02:35:17.869602 2333562 main.go:141] libmachine: Using SSH client type: native
	I1006 02:35:17.870050 2333562 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 35344 <nil> <nil>}
	I1006 02:35:17.870064 2333562 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-951739-m02 && echo "multinode-951739-m02" | sudo tee /etc/hostname
	I1006 02:35:17.870763 2333562 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1006 02:35:21.021311 2333562 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-951739-m02
	
	I1006 02:35:21.021395 2333562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951739-m02
	I1006 02:35:21.043293 2333562 main.go:141] libmachine: Using SSH client type: native
	I1006 02:35:21.043726 2333562 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 35344 <nil> <nil>}
	I1006 02:35:21.043750 2333562 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-951739-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-951739-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-951739-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 02:35:21.180613 2333562 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 02:35:21.180639 2333562 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17314-2262959/.minikube CaCertPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17314-2262959/.minikube}
	I1006 02:35:21.180658 2333562 ubuntu.go:177] setting up certificates
	I1006 02:35:21.180668 2333562 provision.go:83] configureAuth start
	I1006 02:35:21.180732 2333562 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-951739-m02
	I1006 02:35:21.199740 2333562 provision.go:138] copyHostCerts
	I1006 02:35:21.199782 2333562 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17314-2262959/.minikube/cert.pem
	I1006 02:35:21.199814 2333562 exec_runner.go:144] found /home/jenkins/minikube-integration/17314-2262959/.minikube/cert.pem, removing ...
	I1006 02:35:21.199825 2333562 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17314-2262959/.minikube/cert.pem
	I1006 02:35:21.199902 2333562 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17314-2262959/.minikube/cert.pem (1123 bytes)
	I1006 02:35:21.199982 2333562 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17314-2262959/.minikube/key.pem
	I1006 02:35:21.200005 2333562 exec_runner.go:144] found /home/jenkins/minikube-integration/17314-2262959/.minikube/key.pem, removing ...
	I1006 02:35:21.200010 2333562 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17314-2262959/.minikube/key.pem
	I1006 02:35:21.200051 2333562 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17314-2262959/.minikube/key.pem (1675 bytes)
	I1006 02:35:21.200101 2333562 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.pem
	I1006 02:35:21.200122 2333562 exec_runner.go:144] found /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.pem, removing ...
	I1006 02:35:21.200126 2333562 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.pem
	I1006 02:35:21.200156 2333562 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.pem (1082 bytes)
	I1006 02:35:21.200207 2333562 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca-key.pem org=jenkins.multinode-951739-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-951739-m02]
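Generating that server certificate amounts to signing a template carrying the SAN list from the log line above with the CA key. A compressed sketch with crypto/x509 (a throwaway self-signed CA stands in for loading ca.pem/ca-key.pem, and error handling is elided for brevity); this illustrates the mechanism, not minikube's exact code:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Stand-in CA (the real one is read from the .minikube certs dir).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert with the SAN set from the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-951739-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            DNSNames:     []string{"localhost", "minikube", "multinode-951739-m02"},
            IPAddresses:  []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }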
	I1006 02:35:21.603850 2333562 provision.go:172] copyRemoteCerts
	I1006 02:35:21.603920 2333562 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 02:35:21.603964 2333562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951739-m02
	I1006 02:35:21.624438 2333562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35344 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/multinode-951739-m02/id_rsa Username:docker}
	I1006 02:35:21.722055 2333562 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1006 02:35:21.722119 2333562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 02:35:21.752988 2333562 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1006 02:35:21.753075 2333562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1006 02:35:21.786000 2333562 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1006 02:35:21.786075 2333562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 02:35:21.817124 2333562 provision.go:86] duration metric: configureAuth took 636.438254ms
	I1006 02:35:21.817150 2333562 ubuntu.go:193] setting minikube options for container-runtime
	I1006 02:35:21.817361 2333562 config.go:182] Loaded profile config "multinode-951739": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1006 02:35:21.817466 2333562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951739-m02
	I1006 02:35:21.837243 2333562 main.go:141] libmachine: Using SSH client type: native
	I1006 02:35:21.837658 2333562 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 35344 <nil> <nil>}
	I1006 02:35:21.837674 2333562 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 02:35:22.094133 2333562 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 02:35:22.094154 2333562 machine.go:91] provisioned docker machine in 4.273305012s
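The stray %!s(MISSING) in the printf command above (and the "11%!" / "(MISSING)" split around the later df output) is not corruption in this report: it is Go's fmt package flagging a % verb that had no matching operand, because the shell text was routed through a printf-style logger. Two lines reproduce both shapes:

    package main

    import "fmt"

    func main() {
        // A literal %s in the format string with no argument supplied:
        fmt.Println(fmt.Sprintf("printf %s \"...\"")) // printf %!s(MISSING) "..."
        // A bare % followed by a newline (df printed "11%"):
        fmt.Print(fmt.Sprintf("11%\n")) // 11%! then (MISSING) on the next line
    }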
	I1006 02:35:22.094165 2333562 client.go:171] LocalClient.Create took 10.382283341s
	I1006 02:35:22.094177 2333562 start.go:167] duration metric: libmachine.API.Create for "multinode-951739" took 10.382337158s
	I1006 02:35:22.094185 2333562 start.go:300] post-start starting for "multinode-951739-m02" (driver="docker")
	I1006 02:35:22.094195 2333562 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 02:35:22.094264 2333562 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 02:35:22.094303 2333562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951739-m02
	I1006 02:35:22.118717 2333562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35344 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/multinode-951739-m02/id_rsa Username:docker}
	I1006 02:35:22.220847 2333562 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 02:35:22.225278 2333562 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1006 02:35:22.225296 2333562 command_runner.go:130] > NAME="Ubuntu"
	I1006 02:35:22.225304 2333562 command_runner.go:130] > VERSION_ID="22.04"
	I1006 02:35:22.225310 2333562 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1006 02:35:22.225316 2333562 command_runner.go:130] > VERSION_CODENAME=jammy
	I1006 02:35:22.225321 2333562 command_runner.go:130] > ID=ubuntu
	I1006 02:35:22.225326 2333562 command_runner.go:130] > ID_LIKE=debian
	I1006 02:35:22.225332 2333562 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1006 02:35:22.225338 2333562 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1006 02:35:22.225345 2333562 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1006 02:35:22.225354 2333562 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1006 02:35:22.225359 2333562 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1006 02:35:22.225557 2333562 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 02:35:22.225590 2333562 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1006 02:35:22.225606 2333562 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1006 02:35:22.225618 2333562 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1006 02:35:22.225628 2333562 filesync.go:126] Scanning /home/jenkins/minikube-integration/17314-2262959/.minikube/addons for local assets ...
	I1006 02:35:22.225691 2333562 filesync.go:126] Scanning /home/jenkins/minikube-integration/17314-2262959/.minikube/files for local assets ...
	I1006 02:35:22.225773 2333562 filesync.go:149] local asset: /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/22683062.pem -> 22683062.pem in /etc/ssl/certs
	I1006 02:35:22.225785 2333562 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/22683062.pem -> /etc/ssl/certs/22683062.pem
	I1006 02:35:22.225884 2333562 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 02:35:22.237568 2333562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/22683062.pem --> /etc/ssl/certs/22683062.pem (1708 bytes)
	I1006 02:35:22.267771 2333562 start.go:303] post-start completed in 173.571372ms
	I1006 02:35:22.268136 2333562 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-951739-m02
	I1006 02:35:22.288438 2333562 profile.go:148] Saving config to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/config.json ...
	I1006 02:35:22.288732 2333562 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 02:35:22.288780 2333562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951739-m02
	I1006 02:35:22.306945 2333562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35344 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/multinode-951739-m02/id_rsa Username:docker}
	I1006 02:35:22.402047 2333562 command_runner.go:130] > 11%!
	(MISSING)I1006 02:35:22.402117 2333562 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 02:35:22.407952 2333562 command_runner.go:130] > 173G
	I1006 02:35:22.408389 2333562 start.go:128] duration metric: createHost completed in 10.700120372s
	I1006 02:35:22.408412 2333562 start.go:83] releasing machines lock for "multinode-951739-m02", held for 10.700267284s
	I1006 02:35:22.408497 2333562 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-951739-m02
	I1006 02:35:22.432064 2333562 out.go:177] * Found network options:
	I1006 02:35:22.433883 2333562 out.go:177]   - NO_PROXY=192.168.58.2
	W1006 02:35:22.435966 2333562 proxy.go:119] fail to check proxy env: Error ip not in block
	W1006 02:35:22.436009 2333562 proxy.go:119] fail to check proxy env: Error ip not in block
	I1006 02:35:22.436081 2333562 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 02:35:22.436143 2333562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951739-m02
	I1006 02:35:22.436438 2333562 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 02:35:22.436491 2333562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951739-m02
	I1006 02:35:22.459407 2333562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35344 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/multinode-951739-m02/id_rsa Username:docker}
	I1006 02:35:22.460425 2333562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35344 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/multinode-951739-m02/id_rsa Username:docker}
	I1006 02:35:22.696360 2333562 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1006 02:35:22.724611 2333562 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1006 02:35:22.730256 2333562 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1006 02:35:22.730324 2333562 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1006 02:35:22.730339 2333562 command_runner.go:130] > Device: b3h/179d	Inode: 1823254     Links: 1
	I1006 02:35:22.730349 2333562 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1006 02:35:22.730362 2333562 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1006 02:35:22.730372 2333562 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1006 02:35:22.730382 2333562 command_runner.go:130] > Change: 2023-10-06 02:11:31.928487767 +0000
	I1006 02:35:22.730388 2333562 command_runner.go:130] >  Birth: 2023-10-06 02:11:31.928487767 +0000
	I1006 02:35:22.731499 2333562 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 02:35:22.758201 2333562 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1006 02:35:22.758286 2333562 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 02:35:22.804119 2333562 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1006 02:35:22.804219 2333562 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1006 02:35:22.804252 2333562 start.go:472] detecting cgroup driver to use...
	I1006 02:35:22.804290 2333562 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1006 02:35:22.804361 2333562 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 02:35:22.824393 2333562 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 02:35:22.838327 2333562 docker.go:198] disabling cri-docker service (if available) ...
	I1006 02:35:22.838421 2333562 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 02:35:22.855847 2333562 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 02:35:22.873435 2333562 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 02:35:22.980524 2333562 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 02:35:23.089636 2333562 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1006 02:35:23.089665 2333562 docker.go:214] disabling docker service ...
	I1006 02:35:23.089716 2333562 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 02:35:23.112197 2333562 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 02:35:23.126880 2333562 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 02:35:23.219091 2333562 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1006 02:35:23.219161 2333562 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 02:35:23.336224 2333562 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1006 02:35:23.336297 2333562 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 02:35:23.350475 2333562 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 02:35:23.380880 2333562 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1006 02:35:23.380933 2333562 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1006 02:35:23.381007 2333562 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 02:35:23.393380 2333562 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 02:35:23.393453 2333562 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 02:35:23.405642 2333562 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 02:35:23.417387 2333562 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
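After those three sed edits, the drop-in should read roughly as below. The key/value lines are exactly what the commands write; the section headers are where stock crio.conf keeps these keys and are an assumption, since the log never prints the file:

    # /etc/crio/crio.conf.d/02-crio.conf (reconstructed fragment)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"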
	I1006 02:35:23.429701 2333562 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 02:35:23.440887 2333562 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 02:35:23.450145 2333562 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1006 02:35:23.451394 2333562 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 02:35:23.461572 2333562 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 02:35:23.561032 2333562 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1006 02:35:23.707959 2333562 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 02:35:23.708090 2333562 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 02:35:23.712853 2333562 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1006 02:35:23.712913 2333562 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1006 02:35:23.712944 2333562 command_runner.go:130] > Device: bdh/189d	Inode: 190         Links: 1
	I1006 02:35:23.712966 2333562 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1006 02:35:23.712985 2333562 command_runner.go:130] > Access: 2023-10-06 02:35:23.693655621 +0000
	I1006 02:35:23.713024 2333562 command_runner.go:130] > Modify: 2023-10-06 02:35:23.693655621 +0000
	I1006 02:35:23.713045 2333562 command_runner.go:130] > Change: 2023-10-06 02:35:23.693655621 +0000
	I1006 02:35:23.713066 2333562 command_runner.go:130] >  Birth: -
	I1006 02:35:23.713379 2333562 start.go:540] Will wait 60s for crictl version
	I1006 02:35:23.713474 2333562 ssh_runner.go:195] Run: which crictl
	I1006 02:35:23.717504 2333562 command_runner.go:130] > /usr/bin/crictl
	I1006 02:35:23.718019 2333562 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1006 02:35:23.757201 2333562 command_runner.go:130] > Version:  0.1.0
	I1006 02:35:23.757269 2333562 command_runner.go:130] > RuntimeName:  cri-o
	I1006 02:35:23.757297 2333562 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1006 02:35:23.757323 2333562 command_runner.go:130] > RuntimeApiVersion:  v1
	I1006 02:35:23.759811 2333562 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1006 02:35:23.759968 2333562 ssh_runner.go:195] Run: crio --version
	I1006 02:35:23.807430 2333562 command_runner.go:130] > crio version 1.24.6
	I1006 02:35:23.807453 2333562 command_runner.go:130] > Version:          1.24.6
	I1006 02:35:23.807462 2333562 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1006 02:35:23.807467 2333562 command_runner.go:130] > GitTreeState:     clean
	I1006 02:35:23.807475 2333562 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1006 02:35:23.807489 2333562 command_runner.go:130] > GoVersion:        go1.18.2
	I1006 02:35:23.807494 2333562 command_runner.go:130] > Compiler:         gc
	I1006 02:35:23.807500 2333562 command_runner.go:130] > Platform:         linux/arm64
	I1006 02:35:23.807510 2333562 command_runner.go:130] > Linkmode:         dynamic
	I1006 02:35:23.807523 2333562 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1006 02:35:23.807531 2333562 command_runner.go:130] > SeccompEnabled:   true
	I1006 02:35:23.807537 2333562 command_runner.go:130] > AppArmorEnabled:  false
	I1006 02:35:23.809600 2333562 ssh_runner.go:195] Run: crio --version
	I1006 02:35:23.852952 2333562 command_runner.go:130] > crio version 1.24.6
	I1006 02:35:23.852973 2333562 command_runner.go:130] > Version:          1.24.6
	I1006 02:35:23.852983 2333562 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1006 02:35:23.852989 2333562 command_runner.go:130] > GitTreeState:     clean
	I1006 02:35:23.852997 2333562 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1006 02:35:23.853003 2333562 command_runner.go:130] > GoVersion:        go1.18.2
	I1006 02:35:23.853008 2333562 command_runner.go:130] > Compiler:         gc
	I1006 02:35:23.853020 2333562 command_runner.go:130] > Platform:         linux/arm64
	I1006 02:35:23.853029 2333562 command_runner.go:130] > Linkmode:         dynamic
	I1006 02:35:23.853041 2333562 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1006 02:35:23.853049 2333562 command_runner.go:130] > SeccompEnabled:   true
	I1006 02:35:23.853054 2333562 command_runner.go:130] > AppArmorEnabled:  false
	I1006 02:35:23.857557 2333562 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	I1006 02:35:23.859570 2333562 out.go:177]   - env NO_PROXY=192.168.58.2
	I1006 02:35:23.861906 2333562 cli_runner.go:164] Run: docker network inspect multinode-951739 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 02:35:23.879958 2333562 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1006 02:35:23.884602 2333562 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 02:35:23.898000 2333562 certs.go:56] Setting up /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739 for IP: 192.168.58.3
	I1006 02:35:23.898031 2333562 certs.go:190] acquiring lock for shared ca certs: {Name:mk4532656c287f04abff160cc5263fddcb69ac4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:35:23.898169 2333562 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.key
	I1006 02:35:23.898221 2333562 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17314-2262959/.minikube/proxy-client-ca.key
	I1006 02:35:23.898235 2333562 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1006 02:35:23.898248 2333562 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1006 02:35:23.898263 2333562 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1006 02:35:23.898274 2333562 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1006 02:35:23.898332 2333562 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/2268306.pem (1338 bytes)
	W1006 02:35:23.898366 2333562 certs.go:433] ignoring /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/2268306_empty.pem, impossibly tiny 0 bytes
	I1006 02:35:23.898379 2333562 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 02:35:23.898409 2333562 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem (1082 bytes)
	I1006 02:35:23.898439 2333562 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem (1123 bytes)
	I1006 02:35:23.898471 2333562 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/key.pem (1675 bytes)
	I1006 02:35:23.898520 2333562 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/22683062.pem (1708 bytes)
	I1006 02:35:23.898550 2333562 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1006 02:35:23.898566 2333562 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/2268306.pem -> /usr/share/ca-certificates/2268306.pem
	I1006 02:35:23.898576 2333562 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/22683062.pem -> /usr/share/ca-certificates/22683062.pem
	I1006 02:35:23.898894 2333562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 02:35:23.928132 2333562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 02:35:23.956877 2333562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 02:35:23.985240 2333562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1006 02:35:24.016671 2333562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 02:35:24.047362 2333562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/2268306.pem --> /usr/share/ca-certificates/2268306.pem (1338 bytes)
	I1006 02:35:24.077154 2333562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/22683062.pem --> /usr/share/ca-certificates/22683062.pem (1708 bytes)
	I1006 02:35:24.108372 2333562 ssh_runner.go:195] Run: openssl version
	I1006 02:35:24.115175 2333562 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1006 02:35:24.115611 2333562 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 02:35:24.127576 2333562 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 02:35:24.132365 2333562 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  6 02:12 /usr/share/ca-certificates/minikubeCA.pem
	I1006 02:35:24.132629 2333562 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  6 02:12 /usr/share/ca-certificates/minikubeCA.pem
	I1006 02:35:24.132714 2333562 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 02:35:24.141320 2333562 command_runner.go:130] > b5213941
	I1006 02:35:24.141713 2333562 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 02:35:24.153081 2333562 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2268306.pem && ln -fs /usr/share/ca-certificates/2268306.pem /etc/ssl/certs/2268306.pem"
	I1006 02:35:24.164647 2333562 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2268306.pem
	I1006 02:35:24.169403 2333562 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  6 02:19 /usr/share/ca-certificates/2268306.pem
	I1006 02:35:24.169449 2333562 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  6 02:19 /usr/share/ca-certificates/2268306.pem
	I1006 02:35:24.169504 2333562 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2268306.pem
	I1006 02:35:24.178021 2333562 command_runner.go:130] > 51391683
	I1006 02:35:24.178096 2333562 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2268306.pem /etc/ssl/certs/51391683.0"
	I1006 02:35:24.190313 2333562 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22683062.pem && ln -fs /usr/share/ca-certificates/22683062.pem /etc/ssl/certs/22683062.pem"
	I1006 02:35:24.205618 2333562 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22683062.pem
	I1006 02:35:24.211528 2333562 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  6 02:19 /usr/share/ca-certificates/22683062.pem
	I1006 02:35:24.211579 2333562 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  6 02:19 /usr/share/ca-certificates/22683062.pem
	I1006 02:35:24.211632 2333562 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22683062.pem
	I1006 02:35:24.220170 2333562 command_runner.go:130] > 3ec20f2e
	I1006 02:35:24.220601 2333562 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22683062.pem /etc/ssl/certs/3ec20f2e.0"
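
Each CA certificate above is hashed with "openssl x509 -hash -noout" and symlinked as <hash>.0 under /etc/ssl/certs, which is the hashed-directory layout OpenSSL's verify-path lookup expects. A minimal Go sketch of that hash-and-link step (assumptions: openssl on PATH, write access to the certs directory; not minikube's actual code, which runs these commands over SSH):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the OpenSSL subject hash of certPath and
// symlinks <certsDir>/<hash>.0 at it, mirroring the
// "openssl x509 -hash -noout" + "ln -fs" pair in the log.
func linkCertByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // emulate ln -fs: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}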
	I1006 02:35:24.232381 2333562 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1006 02:35:24.236910 2333562 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1006 02:35:24.236943 2333562 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
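
The failed "ls /var/lib/minikube/certs/etcd" (exit status 2) is interpreted as "directory absent, likely first start" rather than as an error. A tiny hedged sketch of that check (the real one runs over SSH):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// etcdCertsExist reports whether the etcd certs directory is listable,
// treating a non-zero ls exit (like status 2 above) as "absent".
func etcdCertsExist() (bool, error) {
	err := exec.Command("ls", "/var/lib/minikube/certs/etcd").Run()
	if err == nil {
		return true, nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return false, nil // ls ran but the path is missing
	}
	return false, err // ls itself could not be executed
}

func main() {
	ok, err := etcdCertsExist()
	fmt.Println("etcd certs present:", ok, "err:", err)
}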
	I1006 02:35:24.237090 2333562 ssh_runner.go:195] Run: crio config
	I1006 02:35:24.291274 2333562 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1006 02:35:24.291300 2333562 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1006 02:35:24.291310 2333562 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1006 02:35:24.291314 2333562 command_runner.go:130] > #
	I1006 02:35:24.291348 2333562 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1006 02:35:24.291360 2333562 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1006 02:35:24.291368 2333562 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1006 02:35:24.291380 2333562 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1006 02:35:24.291385 2333562 command_runner.go:130] > # reload'.
	I1006 02:35:24.291395 2333562 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1006 02:35:24.291449 2333562 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1006 02:35:24.291462 2333562 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1006 02:35:24.291470 2333562 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1006 02:35:24.291477 2333562 command_runner.go:130] > [crio]
	I1006 02:35:24.291487 2333562 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1006 02:35:24.291494 2333562 command_runner.go:130] > # container images, in this directory.
	I1006 02:35:24.292417 2333562 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1006 02:35:24.292441 2333562 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1006 02:35:24.293186 2333562 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1006 02:35:24.293203 2333562 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1006 02:35:24.293235 2333562 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1006 02:35:24.293933 2333562 command_runner.go:130] > # storage_driver = "vfs"
	I1006 02:35:24.293950 2333562 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1006 02:35:24.293982 2333562 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1006 02:35:24.294337 2333562 command_runner.go:130] > # storage_option = [
	I1006 02:35:24.294759 2333562 command_runner.go:130] > # ]
	I1006 02:35:24.294776 2333562 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1006 02:35:24.294807 2333562 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1006 02:35:24.295570 2333562 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1006 02:35:24.295586 2333562 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1006 02:35:24.295614 2333562 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1006 02:35:24.295626 2333562 command_runner.go:130] > # always happen on a node reboot
	I1006 02:35:24.296437 2333562 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1006 02:35:24.296460 2333562 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1006 02:35:24.296468 2333562 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1006 02:35:24.296517 2333562 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1006 02:35:24.297264 2333562 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1006 02:35:24.297313 2333562 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1006 02:35:24.297328 2333562 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1006 02:35:24.298083 2333562 command_runner.go:130] > # internal_wipe = true
	I1006 02:35:24.298104 2333562 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1006 02:35:24.298113 2333562 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1006 02:35:24.298140 2333562 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1006 02:35:24.299054 2333562 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1006 02:35:24.299070 2333562 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1006 02:35:24.299075 2333562 command_runner.go:130] > [crio.api]
	I1006 02:35:24.299081 2333562 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1006 02:35:24.300004 2333562 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1006 02:35:24.300023 2333562 command_runner.go:130] > # IP address on which the stream server will listen.
	I1006 02:35:24.300950 2333562 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1006 02:35:24.300973 2333562 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1006 02:35:24.300980 2333562 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1006 02:35:24.301866 2333562 command_runner.go:130] > # stream_port = "0"
	I1006 02:35:24.301883 2333562 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1006 02:35:24.302672 2333562 command_runner.go:130] > # stream_enable_tls = false
	I1006 02:35:24.302692 2333562 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1006 02:35:24.303360 2333562 command_runner.go:130] > # stream_idle_timeout = ""
	I1006 02:35:24.303382 2333562 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1006 02:35:24.303391 2333562 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1006 02:35:24.303396 2333562 command_runner.go:130] > # minutes.
	I1006 02:35:24.303970 2333562 command_runner.go:130] > # stream_tls_cert = ""
	I1006 02:35:24.303994 2333562 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1006 02:35:24.304002 2333562 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1006 02:35:24.304647 2333562 command_runner.go:130] > # stream_tls_key = ""
	I1006 02:35:24.304670 2333562 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1006 02:35:24.304680 2333562 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1006 02:35:24.304705 2333562 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1006 02:35:24.305269 2333562 command_runner.go:130] > # stream_tls_ca = ""
	I1006 02:35:24.305291 2333562 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1006 02:35:24.306054 2333562 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1006 02:35:24.306077 2333562 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1006 02:35:24.306885 2333562 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1006 02:35:24.306931 2333562 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1006 02:35:24.306946 2333562 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1006 02:35:24.306951 2333562 command_runner.go:130] > [crio.runtime]
	I1006 02:35:24.306959 2333562 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1006 02:35:24.306970 2333562 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1006 02:35:24.306975 2333562 command_runner.go:130] > # "nofile=1024:2048"
	I1006 02:35:24.306983 2333562 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1006 02:35:24.307227 2333562 command_runner.go:130] > # default_ulimits = [
	I1006 02:35:24.307449 2333562 command_runner.go:130] > # ]
	I1006 02:35:24.307471 2333562 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1006 02:35:24.307477 2333562 command_runner.go:130] > # no_pivot = false
	I1006 02:35:24.307497 2333562 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1006 02:35:24.307508 2333562 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1006 02:35:24.307515 2333562 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1006 02:35:24.307527 2333562 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1006 02:35:24.307533 2333562 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1006 02:35:24.307546 2333562 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1006 02:35:24.307551 2333562 command_runner.go:130] > # conmon = ""
	I1006 02:35:24.307575 2333562 command_runner.go:130] > # Cgroup setting for conmon
	I1006 02:35:24.307600 2333562 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1006 02:35:24.307617 2333562 command_runner.go:130] > conmon_cgroup = "pod"
	I1006 02:35:24.307629 2333562 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1006 02:35:24.307637 2333562 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1006 02:35:24.307648 2333562 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1006 02:35:24.307665 2333562 command_runner.go:130] > # conmon_env = [
	I1006 02:35:24.307677 2333562 command_runner.go:130] > # ]
	I1006 02:35:24.307700 2333562 command_runner.go:130] > # Additional environment variables to set for all the
	I1006 02:35:24.307713 2333562 command_runner.go:130] > # containers. These are overridden if set in the
	I1006 02:35:24.307721 2333562 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1006 02:35:24.307731 2333562 command_runner.go:130] > # default_env = [
	I1006 02:35:24.307736 2333562 command_runner.go:130] > # ]
	I1006 02:35:24.307745 2333562 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1006 02:35:24.307752 2333562 command_runner.go:130] > # selinux = false
	I1006 02:35:24.307773 2333562 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1006 02:35:24.307789 2333562 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1006 02:35:24.307806 2333562 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1006 02:35:24.307817 2333562 command_runner.go:130] > # seccomp_profile = ""
	I1006 02:35:24.307831 2333562 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1006 02:35:24.307843 2333562 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1006 02:35:24.307851 2333562 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1006 02:35:24.307860 2333562 command_runner.go:130] > # which might increase security.
	I1006 02:35:24.308139 2333562 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1006 02:35:24.308156 2333562 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1006 02:35:24.308180 2333562 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1006 02:35:24.308192 2333562 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1006 02:35:24.308200 2333562 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1006 02:35:24.308210 2333562 command_runner.go:130] > # This option supports live configuration reload.
	I1006 02:35:24.308216 2333562 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1006 02:35:24.308223 2333562 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1006 02:35:24.308229 2333562 command_runner.go:130] > # the cgroup blockio controller.
	I1006 02:35:24.308453 2333562 command_runner.go:130] > # blockio_config_file = ""
	I1006 02:35:24.308470 2333562 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1006 02:35:24.308495 2333562 command_runner.go:130] > # irqbalance daemon.
	I1006 02:35:24.308509 2333562 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1006 02:35:24.308517 2333562 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1006 02:35:24.308528 2333562 command_runner.go:130] > # This option supports live configuration reload.
	I1006 02:35:24.308534 2333562 command_runner.go:130] > # rdt_config_file = ""
	I1006 02:35:24.308541 2333562 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1006 02:35:24.308549 2333562 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1006 02:35:24.308568 2333562 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1006 02:35:24.308577 2333562 command_runner.go:130] > # separate_pull_cgroup = ""
	I1006 02:35:24.308596 2333562 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1006 02:35:24.308612 2333562 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1006 02:35:24.308619 2333562 command_runner.go:130] > # will be added.
	I1006 02:35:24.308630 2333562 command_runner.go:130] > # default_capabilities = [
	I1006 02:35:24.308635 2333562 command_runner.go:130] > # 	"CHOWN",
	I1006 02:35:24.308643 2333562 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1006 02:35:24.308648 2333562 command_runner.go:130] > # 	"FSETID",
	I1006 02:35:24.308653 2333562 command_runner.go:130] > # 	"FOWNER",
	I1006 02:35:24.308672 2333562 command_runner.go:130] > # 	"SETGID",
	I1006 02:35:24.308683 2333562 command_runner.go:130] > # 	"SETUID",
	I1006 02:35:24.308688 2333562 command_runner.go:130] > # 	"SETPCAP",
	I1006 02:35:24.308706 2333562 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1006 02:35:24.308717 2333562 command_runner.go:130] > # 	"KILL",
	I1006 02:35:24.308721 2333562 command_runner.go:130] > # ]
	I1006 02:35:24.308731 2333562 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1006 02:35:24.308741 2333562 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1006 02:35:24.308752 2333562 command_runner.go:130] > # add_inheritable_capabilities = true
	I1006 02:35:24.308760 2333562 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1006 02:35:24.308781 2333562 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1006 02:35:24.308793 2333562 command_runner.go:130] > # default_sysctls = [
	I1006 02:35:24.308799 2333562 command_runner.go:130] > # ]
	I1006 02:35:24.308805 2333562 command_runner.go:130] > # List of devices on the host that a
	I1006 02:35:24.308818 2333562 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1006 02:35:24.308823 2333562 command_runner.go:130] > # allowed_devices = [
	I1006 02:35:24.308828 2333562 command_runner.go:130] > # 	"/dev/fuse",
	I1006 02:35:24.308836 2333562 command_runner.go:130] > # ]
	I1006 02:35:24.308842 2333562 command_runner.go:130] > # List of additional devices, specified as
	I1006 02:35:24.308882 2333562 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1006 02:35:24.308897 2333562 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1006 02:35:24.308906 2333562 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1006 02:35:24.308912 2333562 command_runner.go:130] > # additional_devices = [
	I1006 02:35:24.308920 2333562 command_runner.go:130] > # ]
	I1006 02:35:24.308928 2333562 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1006 02:35:24.308951 2333562 command_runner.go:130] > # cdi_spec_dirs = [
	I1006 02:35:24.308963 2333562 command_runner.go:130] > # 	"/etc/cdi",
	I1006 02:35:24.308970 2333562 command_runner.go:130] > # 	"/var/run/cdi",
	I1006 02:35:24.308979 2333562 command_runner.go:130] > # ]
	I1006 02:35:24.308986 2333562 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1006 02:35:24.309001 2333562 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1006 02:35:24.309006 2333562 command_runner.go:130] > # Defaults to false.
	I1006 02:35:24.309029 2333562 command_runner.go:130] > # device_ownership_from_security_context = false
	I1006 02:35:24.309045 2333562 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1006 02:35:24.309064 2333562 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1006 02:35:24.309076 2333562 command_runner.go:130] > # hooks_dir = [
	I1006 02:35:24.309083 2333562 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1006 02:35:24.309088 2333562 command_runner.go:130] > # ]
	I1006 02:35:24.309095 2333562 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1006 02:35:24.309110 2333562 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1006 02:35:24.309117 2333562 command_runner.go:130] > # its default mounts from the following two files:
	I1006 02:35:24.309133 2333562 command_runner.go:130] > #
	I1006 02:35:24.309148 2333562 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1006 02:35:24.309157 2333562 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1006 02:35:24.309168 2333562 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1006 02:35:24.309173 2333562 command_runner.go:130] > #
	I1006 02:35:24.309180 2333562 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1006 02:35:24.309194 2333562 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1006 02:35:24.309213 2333562 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1006 02:35:24.309227 2333562 command_runner.go:130] > #      only add mounts it finds in this file.
	I1006 02:35:24.309232 2333562 command_runner.go:130] > #
	I1006 02:35:24.309247 2333562 command_runner.go:130] > # default_mounts_file = ""
	I1006 02:35:24.309259 2333562 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1006 02:35:24.309269 2333562 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1006 02:35:24.309276 2333562 command_runner.go:130] > # pids_limit = 0
	I1006 02:35:24.309284 2333562 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1006 02:35:24.309295 2333562 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1006 02:35:24.309304 2333562 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1006 02:35:24.309335 2333562 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1006 02:35:24.309624 2333562 command_runner.go:130] > # log_size_max = -1
	I1006 02:35:24.309643 2333562 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1006 02:35:24.309663 2333562 command_runner.go:130] > # log_to_journald = false
	I1006 02:35:24.309679 2333562 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1006 02:35:24.309688 2333562 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1006 02:35:24.309698 2333562 command_runner.go:130] > # Path to directory for container attach sockets.
	I1006 02:35:24.309705 2333562 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1006 02:35:24.309715 2333562 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1006 02:35:24.309721 2333562 command_runner.go:130] > # bind_mount_prefix = ""
	I1006 02:35:24.309756 2333562 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1006 02:35:24.309771 2333562 command_runner.go:130] > # read_only = false
	I1006 02:35:24.309780 2333562 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1006 02:35:24.309793 2333562 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1006 02:35:24.309799 2333562 command_runner.go:130] > # live configuration reload.
	I1006 02:35:24.309807 2333562 command_runner.go:130] > # log_level = "info"
	I1006 02:35:24.309814 2333562 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1006 02:35:24.309844 2333562 command_runner.go:130] > # This option supports live configuration reload.
	I1006 02:35:24.309857 2333562 command_runner.go:130] > # log_filter = ""
	I1006 02:35:24.309866 2333562 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1006 02:35:24.309885 2333562 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1006 02:35:24.309891 2333562 command_runner.go:130] > # separated by comma.
	I1006 02:35:24.309899 2333562 command_runner.go:130] > # uid_mappings = ""
	I1006 02:35:24.309907 2333562 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1006 02:35:24.309938 2333562 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1006 02:35:24.309950 2333562 command_runner.go:130] > # separated by comma.
	I1006 02:35:24.309956 2333562 command_runner.go:130] > # gid_mappings = ""
	I1006 02:35:24.309969 2333562 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1006 02:35:24.309978 2333562 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1006 02:35:24.309988 2333562 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1006 02:35:24.309995 2333562 command_runner.go:130] > # minimum_mappable_uid = -1
	I1006 02:35:24.310024 2333562 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1006 02:35:24.310039 2333562 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1006 02:35:24.310048 2333562 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1006 02:35:24.310058 2333562 command_runner.go:130] > # minimum_mappable_gid = -1
	I1006 02:35:24.310067 2333562 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1006 02:35:24.310078 2333562 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1006 02:35:24.310086 2333562 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1006 02:35:24.310104 2333562 command_runner.go:130] > # ctr_stop_timeout = 30
	I1006 02:35:24.310129 2333562 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1006 02:35:24.310144 2333562 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1006 02:35:24.310152 2333562 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1006 02:35:24.310161 2333562 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1006 02:35:24.310167 2333562 command_runner.go:130] > # drop_infra_ctr = true
	I1006 02:35:24.310175 2333562 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1006 02:35:24.310205 2333562 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1006 02:35:24.310221 2333562 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1006 02:35:24.310229 2333562 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1006 02:35:24.310241 2333562 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1006 02:35:24.310247 2333562 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1006 02:35:24.310256 2333562 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1006 02:35:24.310265 2333562 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1006 02:35:24.310708 2333562 command_runner.go:130] > # pinns_path = ""
	I1006 02:35:24.310724 2333562 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1006 02:35:24.310733 2333562 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1006 02:35:24.310756 2333562 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1006 02:35:24.310767 2333562 command_runner.go:130] > # default_runtime = "runc"
	I1006 02:35:24.310781 2333562 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1006 02:35:24.310794 2333562 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1006 02:35:24.310809 2333562 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1006 02:35:24.310830 2333562 command_runner.go:130] > # creation as a file is not desired either.
	I1006 02:35:24.310855 2333562 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1006 02:35:24.310872 2333562 command_runner.go:130] > # the hostname is being managed dynamically.
	I1006 02:35:24.310883 2333562 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1006 02:35:24.310891 2333562 command_runner.go:130] > # ]
	I1006 02:35:24.310899 2333562 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1006 02:35:24.310910 2333562 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1006 02:35:24.310918 2333562 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1006 02:35:24.310946 2333562 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1006 02:35:24.310957 2333562 command_runner.go:130] > #
	I1006 02:35:24.310969 2333562 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1006 02:35:24.310979 2333562 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1006 02:35:24.310985 2333562 command_runner.go:130] > #  runtime_type = "oci"
	I1006 02:35:24.310999 2333562 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1006 02:35:24.311022 2333562 command_runner.go:130] > #  privileged_without_host_devices = false
	I1006 02:35:24.311039 2333562 command_runner.go:130] > #  allowed_annotations = []
	I1006 02:35:24.311064 2333562 command_runner.go:130] > # Where:
	I1006 02:35:24.311072 2333562 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1006 02:35:24.311086 2333562 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1006 02:35:24.311098 2333562 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1006 02:35:24.311118 2333562 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1006 02:35:24.311129 2333562 command_runner.go:130] > #   in $PATH.
	I1006 02:35:24.311148 2333562 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1006 02:35:24.311161 2333562 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1006 02:35:24.311169 2333562 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1006 02:35:24.311178 2333562 command_runner.go:130] > #   state.
	I1006 02:35:24.311186 2333562 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1006 02:35:24.311197 2333562 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1006 02:35:24.311218 2333562 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1006 02:35:24.311232 2333562 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1006 02:35:24.311257 2333562 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1006 02:35:24.311271 2333562 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1006 02:35:24.311281 2333562 command_runner.go:130] > #   The currently recognized values are:
	I1006 02:35:24.311293 2333562 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1006 02:35:24.311302 2333562 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1006 02:35:24.311314 2333562 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1006 02:35:24.311376 2333562 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1006 02:35:24.311406 2333562 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1006 02:35:24.311434 2333562 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1006 02:35:24.311448 2333562 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1006 02:35:24.311461 2333562 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1006 02:35:24.311468 2333562 command_runner.go:130] > #   should be moved to the container's cgroup
	I1006 02:35:24.311486 2333562 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1006 02:35:24.311498 2333562 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1006 02:35:24.311506 2333562 command_runner.go:130] > runtime_type = "oci"
	I1006 02:35:24.311512 2333562 command_runner.go:130] > runtime_root = "/run/runc"
	I1006 02:35:24.311521 2333562 command_runner.go:130] > runtime_config_path = ""
	I1006 02:35:24.311530 2333562 command_runner.go:130] > monitor_path = ""
	I1006 02:35:24.311535 2333562 command_runner.go:130] > monitor_cgroup = ""
	I1006 02:35:24.311543 2333562 command_runner.go:130] > monitor_exec_cgroup = ""
	I1006 02:35:24.311573 2333562 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1006 02:35:24.311599 2333562 command_runner.go:130] > # running containers
	I1006 02:35:24.311611 2333562 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1006 02:35:24.311619 2333562 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1006 02:35:24.311632 2333562 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1006 02:35:24.311639 2333562 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1006 02:35:24.311649 2333562 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1006 02:35:24.311655 2333562 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1006 02:35:24.311664 2333562 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1006 02:35:24.311685 2333562 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1006 02:35:24.311705 2333562 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1006 02:35:24.311711 2333562 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1006 02:35:24.311724 2333562 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1006 02:35:24.311734 2333562 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1006 02:35:24.311819 2333562 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1006 02:35:24.311838 2333562 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1006 02:35:24.311875 2333562 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1006 02:35:24.311889 2333562 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1006 02:35:24.311904 2333562 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1006 02:35:24.311930 2333562 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1006 02:35:24.311944 2333562 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1006 02:35:24.311963 2333562 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1006 02:35:24.311974 2333562 command_runner.go:130] > # Example:
	I1006 02:35:24.311980 2333562 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1006 02:35:24.311990 2333562 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1006 02:35:24.312000 2333562 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1006 02:35:24.312007 2333562 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1006 02:35:24.312016 2333562 command_runner.go:130] > # cpuset = 0
	I1006 02:35:24.312021 2333562 command_runner.go:130] > # cpushares = "0-1"
	I1006 02:35:24.312035 2333562 command_runner.go:130] > # Where:
	I1006 02:35:24.312047 2333562 command_runner.go:130] > # The workload name is workload-type.
	I1006 02:35:24.312067 2333562 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1006 02:35:24.312082 2333562 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1006 02:35:24.312091 2333562 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1006 02:35:24.312106 2333562 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1006 02:35:24.312113 2333562 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1006 02:35:24.312121 2333562 command_runner.go:130] > # 
	I1006 02:35:24.312144 2333562 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1006 02:35:24.312155 2333562 command_runner.go:130] > #
	I1006 02:35:24.312174 2333562 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1006 02:35:24.312189 2333562 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1006 02:35:24.312198 2333562 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1006 02:35:24.312210 2333562 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1006 02:35:24.312225 2333562 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1006 02:35:24.312233 2333562 command_runner.go:130] > [crio.image]
	I1006 02:35:24.312254 2333562 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1006 02:35:24.312276 2333562 command_runner.go:130] > # default_transport = "docker://"
	I1006 02:35:24.312288 2333562 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1006 02:35:24.312299 2333562 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1006 02:35:24.312307 2333562 command_runner.go:130] > # global_auth_file = ""
	I1006 02:35:24.312314 2333562 command_runner.go:130] > # The image used to instantiate infra containers.
	I1006 02:35:24.312324 2333562 command_runner.go:130] > # This option supports live configuration reload.
	I1006 02:35:24.312330 2333562 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1006 02:35:24.312362 2333562 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1006 02:35:24.312377 2333562 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1006 02:35:24.312390 2333562 command_runner.go:130] > # This option supports live configuration reload.
	I1006 02:35:24.312401 2333562 command_runner.go:130] > # pause_image_auth_file = ""
	I1006 02:35:24.312409 2333562 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1006 02:35:24.312421 2333562 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1006 02:35:24.312444 2333562 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1006 02:35:24.312468 2333562 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1006 02:35:24.312481 2333562 command_runner.go:130] > # pause_command = "/pause"
	I1006 02:35:24.312490 2333562 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1006 02:35:24.312501 2333562 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1006 02:35:24.312510 2333562 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1006 02:35:24.312520 2333562 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1006 02:35:24.312530 2333562 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1006 02:35:24.312548 2333562 command_runner.go:130] > # signature_policy = ""
	I1006 02:35:24.312561 2333562 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1006 02:35:24.312570 2333562 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1006 02:35:24.312579 2333562 command_runner.go:130] > # changing them here.
	I1006 02:35:24.312584 2333562 command_runner.go:130] > # insecure_registries = [
	I1006 02:35:24.312588 2333562 command_runner.go:130] > # ]
	I1006 02:35:24.312600 2333562 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1006 02:35:24.312607 2333562 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1006 02:35:24.312621 2333562 command_runner.go:130] > # image_volumes = "mkdir"
	I1006 02:35:24.312629 2333562 command_runner.go:130] > # Temporary directory to use for storing big files
	I1006 02:35:24.312634 2333562 command_runner.go:130] > # big_files_temporary_dir = ""
	I1006 02:35:24.312642 2333562 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1006 02:35:24.312649 2333562 command_runner.go:130] > # CNI plugins.
	I1006 02:35:24.312653 2333562 command_runner.go:130] > [crio.network]
	I1006 02:35:24.312661 2333562 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1006 02:35:24.312667 2333562 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1006 02:35:24.312675 2333562 command_runner.go:130] > # cni_default_network = ""
	I1006 02:35:24.312741 2333562 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1006 02:35:24.312749 2333562 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1006 02:35:24.312757 2333562 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1006 02:35:24.312762 2333562 command_runner.go:130] > # plugin_dirs = [
	I1006 02:35:24.312766 2333562 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1006 02:35:24.312770 2333562 command_runner.go:130] > # ]
	I1006 02:35:24.312777 2333562 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1006 02:35:24.312799 2333562 command_runner.go:130] > [crio.metrics]
	I1006 02:35:24.312807 2333562 command_runner.go:130] > # Globally enable or disable metrics support.
	I1006 02:35:24.312813 2333562 command_runner.go:130] > # enable_metrics = false
	I1006 02:35:24.312818 2333562 command_runner.go:130] > # Specify enabled metrics collectors.
	I1006 02:35:24.312824 2333562 command_runner.go:130] > # By default all metrics are enabled.
	I1006 02:35:24.312832 2333562 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1006 02:35:24.312839 2333562 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1006 02:35:24.312846 2333562 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1006 02:35:24.312851 2333562 command_runner.go:130] > # metrics_collectors = [
	I1006 02:35:24.312856 2333562 command_runner.go:130] > # 	"operations",
	I1006 02:35:24.312862 2333562 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1006 02:35:24.312867 2333562 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1006 02:35:24.312873 2333562 command_runner.go:130] > # 	"operations_errors",
	I1006 02:35:24.312878 2333562 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1006 02:35:24.312883 2333562 command_runner.go:130] > # 	"image_pulls_by_name",
	I1006 02:35:24.312888 2333562 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1006 02:35:24.312893 2333562 command_runner.go:130] > # 	"image_pulls_failures",
	I1006 02:35:24.312898 2333562 command_runner.go:130] > # 	"image_pulls_successes",
	I1006 02:35:24.312907 2333562 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1006 02:35:24.313175 2333562 command_runner.go:130] > # 	"image_layer_reuse",
	I1006 02:35:24.313192 2333562 command_runner.go:130] > # 	"containers_oom_total",
	I1006 02:35:24.313198 2333562 command_runner.go:130] > # 	"containers_oom",
	I1006 02:35:24.313208 2333562 command_runner.go:130] > # 	"processes_defunct",
	I1006 02:35:24.313213 2333562 command_runner.go:130] > # 	"operations_total",
	I1006 02:35:24.313237 2333562 command_runner.go:130] > # 	"operations_latency_seconds",
	I1006 02:35:24.313252 2333562 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1006 02:35:24.313269 2333562 command_runner.go:130] > # 	"operations_errors_total",
	I1006 02:35:24.313281 2333562 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1006 02:35:24.313287 2333562 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1006 02:35:24.313297 2333562 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1006 02:35:24.313302 2333562 command_runner.go:130] > # 	"image_pulls_success_total",
	I1006 02:35:24.313308 2333562 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1006 02:35:24.313318 2333562 command_runner.go:130] > # 	"containers_oom_count_total",
	I1006 02:35:24.313323 2333562 command_runner.go:130] > # ]
	I1006 02:35:24.313333 2333562 command_runner.go:130] > # The port on which the metrics server will listen.
	I1006 02:35:24.313355 2333562 command_runner.go:130] > # metrics_port = 9090
	I1006 02:35:24.313371 2333562 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1006 02:35:24.313378 2333562 command_runner.go:130] > # metrics_socket = ""
	I1006 02:35:24.313385 2333562 command_runner.go:130] > # The certificate for the secure metrics server.
	I1006 02:35:24.313405 2333562 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1006 02:35:24.313427 2333562 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1006 02:35:24.313440 2333562 command_runner.go:130] > # certificate on any modification event.
	I1006 02:35:24.313455 2333562 command_runner.go:130] > # metrics_cert = ""
	I1006 02:35:24.313467 2333562 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1006 02:35:24.313474 2333562 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1006 02:35:24.313483 2333562 command_runner.go:130] > # metrics_key = ""
	I1006 02:35:24.313491 2333562 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1006 02:35:24.313499 2333562 command_runner.go:130] > [crio.tracing]
	I1006 02:35:24.313506 2333562 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1006 02:35:24.313515 2333562 command_runner.go:130] > # enable_tracing = false
	I1006 02:35:24.313531 2333562 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1006 02:35:24.313544 2333562 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1006 02:35:24.313551 2333562 command_runner.go:130] > # Number of samples to collect per million spans.
	I1006 02:35:24.313556 2333562 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1006 02:35:24.313568 2333562 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1006 02:35:24.313573 2333562 command_runner.go:130] > [crio.stats]
	I1006 02:35:24.313590 2333562 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1006 02:35:24.313604 2333562 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1006 02:35:24.313610 2333562 command_runner.go:130] > # stats_collection_period = 0
	I1006 02:35:24.314079 2333562 command_runner.go:130] ! time="2023-10-06 02:35:24.288793569Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1006 02:35:24.314104 2333562 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
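
The dump above is the stdout of "crio config" (the two "!"-prefixed lines are its stderr banner); the run cares about settings such as cgroup_manager and pause_image. A hedged Go sketch of pulling those two keys out of the TOML, assuming the github.com/BurntSushi/toml library (an assumption for illustration, not necessarily what minikube uses):

package main

import (
	"fmt"
	"os/exec"

	"github.com/BurntSushi/toml"
)

// crioConfig captures the two settings the log cares about: the cgroup
// driver ("cgroupfs" above) and the pause image. Field layout follows
// the [crio.runtime] and [crio.image] tables in the dump; sketch only.
type crioConfig struct {
	Crio struct {
		Runtime struct {
			CgroupManager string `toml:"cgroup_manager"`
		} `toml:"runtime"`
		Image struct {
			PauseImage string `toml:"pause_image"`
		} `toml:"image"`
	} `toml:"crio"`
}

func main() {
	out, err := exec.Command("crio", "config").Output() // stdout only; stderr carries the banner
	if err != nil {
		fmt.Println("crio config failed:", err)
		return
	}
	var cfg crioConfig
	if _, err := toml.Decode(string(out), &cfg); err != nil {
		fmt.Println("decode:", err)
		return
	}
	fmt.Println("cgroup manager:", cfg.Crio.Runtime.CgroupManager)
	fmt.Println("pause image:  ", cfg.Crio.Image.PauseImage)
}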
	I1006 02:35:24.314216 2333562 cni.go:84] Creating CNI manager for ""
	I1006 02:35:24.314231 2333562 cni.go:136] 2 nodes found, recommending kindnet
	I1006 02:35:24.314241 2333562 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1006 02:35:24.314272 2333562 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-951739 NodeName:multinode-951739-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 02:35:24.314419 2333562 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-951739-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
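	The rendered kubeadm config above can be checked offline before a join. A minimal sketch, assuming the YAML is saved to /tmp/kubeadm.yaml and a kubeadm binary recent enough to ship the `config validate` subcommand:

	kubeadm config validate --config /tmp/kubeadm.yaml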
	
	I1006 02:35:24.314489 2333562 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-951739-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-951739 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
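The unit drop-in rendered here is what lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on the worker. A minimal sketch for inspecting it on the joined node (hypothetical session; -n m02 selects the secondary node):

	minikube -p multinode-951739 ssh -n m02 "systemctl cat kubelet"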
	I1006 02:35:24.314571 2333562 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1006 02:35:24.324283 2333562 command_runner.go:130] > kubeadm
	I1006 02:35:24.324303 2333562 command_runner.go:130] > kubectl
	I1006 02:35:24.324308 2333562 command_runner.go:130] > kubelet
	I1006 02:35:24.325779 2333562 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 02:35:24.325865 2333562 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1006 02:35:24.336879 2333562 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1006 02:35:24.362929 2333562 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 02:35:24.385274 2333562 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1006 02:35:24.391432 2333562 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 02:35:24.405122 2333562 host.go:66] Checking if "multinode-951739" exists ...
	I1006 02:35:24.405344 2333562 config.go:182] Loaded profile config "multinode-951739": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1006 02:35:24.405376 2333562 start.go:304] JoinCluster: &{Name:multinode-951739 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-951739 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1006 02:35:24.405455 2333562 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1006 02:35:24.405506 2333562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951739
	I1006 02:35:24.423880 2333562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35339 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/multinode-951739/id_rsa Username:docker}
	I1006 02:35:24.599189 2333562 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token qpvtc6.u0uo5uexkoalfv56 --discovery-token-ca-cert-hash sha256:80472cab419bab12b6f684b1710332d34cec39ebaa3946bc81d26724ddc88d38 
	I1006 02:35:24.599237 2333562 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1006 02:35:24.599266 2333562 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qpvtc6.u0uo5uexkoalfv56 --discovery-token-ca-cert-hash sha256:80472cab419bab12b6f684b1710332d34cec39ebaa3946bc81d26724ddc88d38 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-951739-m02"
	I1006 02:35:24.654789 2333562 command_runner.go:130] ! W1006 02:35:24.654398    1024 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1006 02:35:24.700405 2333562 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-aws\n", err: exit status 1
	I1006 02:35:24.804307 2333562 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 02:35:29.495358 2333562 command_runner.go:130] > [preflight] Running pre-flight checks
	I1006 02:35:29.495383 2333562 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1006 02:35:29.495392 2333562 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1047-aws
	I1006 02:35:29.495398 2333562 command_runner.go:130] > OS: Linux
	I1006 02:35:29.495404 2333562 command_runner.go:130] > CGROUPS_CPU: enabled
	I1006 02:35:29.495411 2333562 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1006 02:35:29.495418 2333562 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1006 02:35:29.495425 2333562 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1006 02:35:29.495431 2333562 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1006 02:35:29.495437 2333562 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1006 02:35:29.495443 2333562 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1006 02:35:29.495450 2333562 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1006 02:35:29.495457 2333562 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1006 02:35:29.495469 2333562 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1006 02:35:29.495479 2333562 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1006 02:35:29.495487 2333562 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 02:35:29.495496 2333562 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 02:35:29.495503 2333562 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1006 02:35:29.495512 2333562 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1006 02:35:29.495518 2333562 command_runner.go:130] > This node has joined the cluster:
	I1006 02:35:29.495526 2333562 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1006 02:35:29.495533 2333562 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1006 02:35:29.495542 2333562 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1006 02:35:29.495554 2333562 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qpvtc6.u0uo5uexkoalfv56 --discovery-token-ca-cert-hash sha256:80472cab419bab12b6f684b1710332d34cec39ebaa3946bc81d26724ddc88d38 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-951739-m02": (4.896274071s)
	I1006 02:35:29.495569 2333562 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1006 02:35:29.735150 2333562 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I1006 02:35:29.735180 2333562 start.go:306] JoinCluster complete in 5.329803241s
	I1006 02:35:29.735191 2333562 cni.go:84] Creating CNI manager for ""
	I1006 02:35:29.735198 2333562 cni.go:136] 2 nodes found, recommending kindnet
	I1006 02:35:29.735255 2333562 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1006 02:35:29.745927 2333562 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1006 02:35:29.745949 2333562 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I1006 02:35:29.745970 2333562 command_runner.go:130] > Device: 38h/56d	Inode: 1826972     Links: 1
	I1006 02:35:29.745979 2333562 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1006 02:35:29.745986 2333562 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I1006 02:35:29.745992 2333562 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I1006 02:35:29.745998 2333562 command_runner.go:130] > Change: 2023-10-06 02:11:32.600474282 +0000
	I1006 02:35:29.746004 2333562 command_runner.go:130] >  Birth: 2023-10-06 02:11:32.556475164 +0000
	I1006 02:35:29.746060 2333562 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1006 02:35:29.746073 2333562 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1006 02:35:29.772458 2333562 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1006 02:35:30.219458 2333562 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1006 02:35:30.224935 2333562 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1006 02:35:30.229279 2333562 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1006 02:35:30.248449 2333562 command_runner.go:130] > daemonset.apps/kindnet configured
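A minimal sketch for confirming the kindnet rollout that the apply above configured (minikube's CNI manifest places the DaemonSet in kube-system):

	kubectl --context multinode-951739 -n kube-system rollout status ds/kindnet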
	I1006 02:35:30.255270 2333562 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17314-2262959/kubeconfig
	I1006 02:35:30.255558 2333562 kapi.go:59] client config for multinode-951739: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/client.crt", KeyFile:"/home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/client.key", CAFile:"/home/jenkins/minikube-integration/17314-2262959/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16a24a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 02:35:30.255906 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1006 02:35:30.255922 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:30.255932 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:30.255940 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:30.258627 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:30.258652 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:30.258660 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:30 GMT
	I1006 02:35:30.258667 2333562 round_trippers.go:580]     Audit-Id: b0050d80-e7f5-4c0b-b9a5-a5de2a6717d8
	I1006 02:35:30.258674 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:30.258680 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:30.258686 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:30.258693 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:30.258708 2333562 round_trippers.go:580]     Content-Length: 291
	I1006 02:35:30.258735 2333562 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c91a07be-980f-49fe-875d-9d42fad520cd","resourceVersion":"449","creationTimestamp":"2023-10-06T02:34:24Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1006 02:35:30.258844 2333562 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-951739" context rescaled to 1 replicas
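The Scale-subresource round trip above is the API-level form of a plain rescale. A minimal hand-run equivalent, assuming the same kubeconfig context:

	kubectl --context multinode-951739 -n kube-system scale deployment coredns --replicas=1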
	I1006 02:35:30.258875 2333562 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1006 02:35:30.261603 2333562 out.go:177] * Verifying Kubernetes components...
	I1006 02:35:30.263596 2333562 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 02:35:30.279327 2333562 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17314-2262959/kubeconfig
	I1006 02:35:30.279605 2333562 kapi.go:59] client config for multinode-951739: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/client.crt", KeyFile:"/home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/multinode-951739/client.key", CAFile:"/home/jenkins/minikube-integration/17314-2262959/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16a24a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 02:35:30.279906 2333562 node_ready.go:35] waiting up to 6m0s for node "multinode-951739-m02" to be "Ready" ...
	I1006 02:35:30.279976 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:30.279987 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:30.279997 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:30.280004 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:30.282452 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:30.282476 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:30.282489 2333562 round_trippers.go:580]     Audit-Id: 1572db1f-a65f-4899-a6b4-cfaaee6e636f
	I1006 02:35:30.282495 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:30.282501 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:30.282507 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:30.282513 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:30.282520 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:30 GMT
	I1006 02:35:30.282645 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"490","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5594 chars]
	I1006 02:35:30.283141 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:30.283155 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:30.283163 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:30.283170 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:30.285631 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:30.285650 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:30.285659 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:30.285665 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:30.285672 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:30.285679 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:30 GMT
	I1006 02:35:30.285697 2333562 round_trippers.go:580]     Audit-Id: e39e8234-5665-422d-94a8-79365002459f
	I1006 02:35:30.285704 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:30.285839 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"490","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5594 chars]
	I1006 02:35:30.786970 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:30.787011 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:30.787021 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:30.787028 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:30.789618 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:30.789640 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:30.789649 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:30 GMT
	I1006 02:35:30.789655 2333562 round_trippers.go:580]     Audit-Id: 248fdc10-78c8-4ab9-a433-bc3e96ac4066
	I1006 02:35:30.789662 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:30.789668 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:30.789675 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:30.789681 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:30.789802 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"490","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5594 chars]
	I1006 02:35:31.286441 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:31.286467 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:31.286477 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:31.286485 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:31.289093 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:31.289151 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:31.289172 2333562 round_trippers.go:580]     Audit-Id: 339b9f42-5d89-47ff-bd0c-d53bd79f3752
	I1006 02:35:31.289195 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:31.289224 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:31.289247 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:31.289274 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:31.289288 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:31 GMT
	I1006 02:35:31.289400 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"504","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5703 chars]
	I1006 02:35:31.786456 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:31.786477 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:31.786489 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:31.786497 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:31.789263 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:31.789314 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:31.789327 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:31.789337 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:31.789344 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:31.789360 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:31.789390 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:31 GMT
	I1006 02:35:31.789400 2333562 round_trippers.go:580]     Audit-Id: 1530a48f-b25f-4251-889f-567348c29c71
	I1006 02:35:31.790006 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"504","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5703 chars]
	I1006 02:35:32.286546 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:32.286567 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:32.286576 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:32.286583 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:32.289311 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:32.289338 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:32.289353 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:32.289359 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:32.289366 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:32.289372 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:32.289378 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:32 GMT
	I1006 02:35:32.289384 2333562 round_trippers.go:580]     Audit-Id: 7747babd-f231-4da2-abb7-afe3eaa3eeeb
	I1006 02:35:32.290536 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"504","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5703 chars]
	I1006 02:35:32.290923 2333562 node_ready.go:58] node "multinode-951739-m02" has status "Ready":"False"
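The GET loop around this line is minikube's readiness poll against the Node object. A minimal sketch of the same wait done by hand, matching the 6m budget logged above:

	kubectl --context multinode-951739 wait --for=condition=Ready node/multinode-951739-m02 --timeout=6m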
	I1006 02:35:32.787262 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:32.787286 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:32.787295 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:32.787303 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:32.789874 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:32.789897 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:32.789906 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:32.789912 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:32.789919 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:32.789925 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:32 GMT
	I1006 02:35:32.789931 2333562 round_trippers.go:580]     Audit-Id: 227c427a-2946-4437-af0f-6c0e013a2edf
	I1006 02:35:32.789937 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:32.790207 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"504","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5703 chars]
	I1006 02:35:33.286955 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:33.286975 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:33.286984 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:33.286992 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:33.289374 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:33.289399 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:33.289407 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:33.289414 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:33 GMT
	I1006 02:35:33.289420 2333562 round_trippers.go:580]     Audit-Id: 939b8abf-846d-4e6f-9253-d91dfc77eee7
	I1006 02:35:33.289431 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:33.289437 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:33.289444 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:33.289611 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"504","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5703 chars]
	I1006 02:35:33.787236 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:33.787258 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:33.787268 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:33.787281 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:33.790090 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:33.790116 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:33.790129 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:33.790140 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:33 GMT
	I1006 02:35:33.790147 2333562 round_trippers.go:580]     Audit-Id: 65c3ac0f-e902-4798-b161-3dcd5061157b
	I1006 02:35:33.790157 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:33.790168 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:33.790177 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:33.790661 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"504","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5703 chars]
	I1006 02:35:34.287419 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:34.287442 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:34.287452 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:34.287459 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:34.290048 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:34.290068 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:34.290076 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:34.290082 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:34 GMT
	I1006 02:35:34.290089 2333562 round_trippers.go:580]     Audit-Id: 28f3a181-1e6d-41a5-b5bd-e85f2021477d
	I1006 02:35:34.290095 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:34.290101 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:34.290107 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:34.290282 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"504","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5703 chars]
	I1006 02:35:34.786713 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:34.786737 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:34.786756 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:34.786764 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:34.801878 2333562 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1006 02:35:34.801900 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:34.801909 2333562 round_trippers.go:580]     Audit-Id: 6bc68484-3a83-41fe-aec6-285da70b350d
	I1006 02:35:34.801915 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:34.801921 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:34.801927 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:34.801933 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:34.801940 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:34 GMT
	I1006 02:35:34.802097 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"504","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5703 chars]
	I1006 02:35:34.802457 2333562 node_ready.go:58] node "multinode-951739-m02" has status "Ready":"False"
	I1006 02:35:35.286387 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:35.286410 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:35.286420 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:35.286428 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:35.288953 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:35.288976 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:35.288983 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:35.288990 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:35.288996 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:35 GMT
	I1006 02:35:35.289002 2333562 round_trippers.go:580]     Audit-Id: 529fdb5c-119a-4a57-93fe-d6a79c7a8225
	I1006 02:35:35.289008 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:35.289014 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:35.289123 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"504","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5703 chars]
	I1006 02:35:35.787190 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:35.787214 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:35.787223 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:35.787230 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:35.789873 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:35.789905 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:35.789913 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:35.789920 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:35 GMT
	I1006 02:35:35.789926 2333562 round_trippers.go:580]     Audit-Id: f7350546-a4b2-45e8-b510-84d83df95afb
	I1006 02:35:35.789932 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:35.789938 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:35.789950 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:35.790163 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"504","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5703 chars]
	I1006 02:35:36.287303 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:36.287328 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:36.287338 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:36.287345 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:36.289784 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:36.289802 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:36.289810 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:36.289816 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:36.289823 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:36.289829 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:36 GMT
	I1006 02:35:36.289836 2333562 round_trippers.go:580]     Audit-Id: 6466d093-2eb7-46cd-9a47-4c74d52fc4d3
	I1006 02:35:36.289841 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:36.289975 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"504","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5703 chars]
	I1006 02:35:36.787246 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:36.787270 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:36.787279 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:36.787287 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:36.789856 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:36.789887 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:36.789896 2333562 round_trippers.go:580]     Audit-Id: 59e40f11-96f0-44ae-881d-d6e805fda981
	I1006 02:35:36.789903 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:36.789909 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:36.789916 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:36.789936 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:36.789946 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:36 GMT
	I1006 02:35:36.790110 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"504","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5703 chars]
	I1006 02:35:37.286594 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:37.286627 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:37.286638 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:37.286645 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:37.290133 2333562 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1006 02:35:37.290156 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:37.290165 2333562 round_trippers.go:580]     Audit-Id: 71c95149-587f-46ca-9983-8813cc0d76c3
	I1006 02:35:37.290171 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:37.290178 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:37.290184 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:37.290191 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:37.290198 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:37 GMT
	I1006 02:35:37.290589 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"504","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5703 chars]
	I1006 02:35:37.291140 2333562 node_ready.go:58] node "multinode-951739-m02" has status "Ready":"False"
	I1006 02:35:37.786485 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:37.786510 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:37.786520 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:37.786529 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:37.789184 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:37.789204 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:37.789213 2333562 round_trippers.go:580]     Audit-Id: 82f53864-b341-4392-97c9-c4f38b0235c8
	I1006 02:35:37.789220 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:37.789226 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:37.789232 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:37.789239 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:37.789245 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:37 GMT
	I1006 02:35:37.789618 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"504","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5703 chars]
	I1006 02:35:38.287131 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:38.287158 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:38.287168 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:38.287176 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:38.289890 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:38.289910 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:38.289918 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:38.289924 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:38.289931 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:38.289937 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:38 GMT
	I1006 02:35:38.289943 2333562 round_trippers.go:580]     Audit-Id: 7888bfe0-6e9d-46e1-b2f3-d5d4215cc117
	I1006 02:35:38.289949 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:38.290070 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"504","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5703 chars]
	I1006 02:35:38.787225 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:38.787252 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:38.787262 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:38.787270 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:38.789715 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:38.789737 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:38.789745 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:38.789752 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:38 GMT
	I1006 02:35:38.789758 2333562 round_trippers.go:580]     Audit-Id: 09c499e8-7fc0-41aa-ac0e-42c098ada42c
	I1006 02:35:38.789765 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:38.789771 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:38.789777 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:38.789900 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"504","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5703 chars]
	I1006 02:35:39.286875 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:39.286917 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:39.286928 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:39.286936 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:39.289544 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:39.289571 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:39.289583 2333562 round_trippers.go:580]     Audit-Id: ceb1aba1-025d-4b6f-808c-df87adca25c9
	I1006 02:35:39.289590 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:39.289601 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:39.289609 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:39.289616 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:39.289629 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:39 GMT
	I1006 02:35:39.289775 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:39.787388 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:39.787411 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:39.787421 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:39.787429 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:39.790014 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:39.790039 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:39.790053 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:39.790060 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:39 GMT
	I1006 02:35:39.790067 2333562 round_trippers.go:580]     Audit-Id: 96ba79bd-dee5-40b0-928d-3fcf12a1a511
	I1006 02:35:39.790073 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:39.790079 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:39.790085 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:39.790233 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:39.790627 2333562 node_ready.go:58] node "multinode-951739-m02" has status "Ready":"False"
	I1006 02:35:40.286987 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:40.287011 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:40.287021 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:40.287056 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:40.289728 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:40.289750 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:40.289758 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:40.289767 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:40.289773 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:40 GMT
	I1006 02:35:40.289779 2333562 round_trippers.go:580]     Audit-Id: 898b77ee-4f37-4b26-9097-37217df88b45
	I1006 02:35:40.289785 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:40.289791 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:40.289909 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:40.787008 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:40.787032 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:40.787061 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:40.787069 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:40.789471 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:40.789507 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:40.789529 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:40.789539 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:40.789548 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:40.789570 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:40 GMT
	I1006 02:35:40.789583 2333562 round_trippers.go:580]     Audit-Id: 86383544-bd51-4a0e-a71a-98b211141fde
	I1006 02:35:40.789599 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:40.789785 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:41.286350 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:41.286376 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:41.286386 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:41.286394 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:41.288843 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:41.288866 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:41.288875 2333562 round_trippers.go:580]     Audit-Id: 22728f8d-786c-4e7a-ac18-bc4cacfc01d0
	I1006 02:35:41.288881 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:41.288888 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:41.288894 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:41.288900 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:41.288906 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:41 GMT
	I1006 02:35:41.289074 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:41.787029 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:41.787096 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:41.787106 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:41.787117 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:41.789649 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:41.789676 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:41.789684 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:41 GMT
	I1006 02:35:41.789691 2333562 round_trippers.go:580]     Audit-Id: 8ef47095-607e-41eb-ad8e-327182f9a55b
	I1006 02:35:41.789698 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:41.789704 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:41.789711 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:41.789718 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:41.789858 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:42.286992 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:42.287017 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:42.287028 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:42.287038 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:42.289630 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:42.289655 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:42.289663 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:42.289671 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:42.289677 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:42 GMT
	I1006 02:35:42.289684 2333562 round_trippers.go:580]     Audit-Id: 53d9b426-901d-4c03-bc35-a0a15cac3f30
	I1006 02:35:42.289691 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:42.289697 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:42.289964 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:42.290370 2333562 node_ready.go:58] node "multinode-951739-m02" has status "Ready":"False"
	I1006 02:35:42.786681 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:42.786704 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:42.786713 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:42.786721 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:42.789289 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:42.789313 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:42.789322 2333562 round_trippers.go:580]     Audit-Id: f1ac7115-b5fc-4a02-915f-33ab041943ee
	I1006 02:35:42.789329 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:42.789335 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:42.789342 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:42.789348 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:42.789355 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:42 GMT
	I1006 02:35:42.789606 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:43.287312 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:43.287338 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:43.287348 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:43.287356 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:43.289751 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:43.289770 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:43.289778 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:43.289790 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:43.289796 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:43.289802 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:43 GMT
	I1006 02:35:43.289808 2333562 round_trippers.go:580]     Audit-Id: da3f8d6c-e0e1-4e5b-a657-a3eef42a44d0
	I1006 02:35:43.289815 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:43.289978 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:43.787094 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:43.787117 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:43.787126 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:43.787133 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:43.789530 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:43.789553 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:43.789571 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:43.789578 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:43.789585 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:43 GMT
	I1006 02:35:43.789594 2333562 round_trippers.go:580]     Audit-Id: 0bced5c7-8f9b-4b0f-a60f-b41737b31bb5
	I1006 02:35:43.789601 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:43.789610 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:43.789801 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:44.286953 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:44.286978 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:44.286993 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:44.287003 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:44.289594 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:44.289617 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:44.289626 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:44.289632 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:44.289639 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:44 GMT
	I1006 02:35:44.289646 2333562 round_trippers.go:580]     Audit-Id: 5befea03-0aa2-483d-aa0e-55da6ae45ee3
	I1006 02:35:44.289652 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:44.289658 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:44.289780 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:44.786939 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:44.786962 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:44.786973 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:44.786980 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:44.789497 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:44.789521 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:44.789529 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:44.789535 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:44 GMT
	I1006 02:35:44.789543 2333562 round_trippers.go:580]     Audit-Id: bdf9173e-5eac-4b89-8344-ea22346fb014
	I1006 02:35:44.789549 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:44.789565 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:44.789574 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:44.789755 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:44.790145 2333562 node_ready.go:58] node "multinode-951739-m02" has status "Ready":"False"
	I1006 02:35:45.287270 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:45.287294 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:45.287304 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:45.287312 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:45.290431 2333562 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1006 02:35:45.290452 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:45.290461 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:45.290468 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:45.290474 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:45 GMT
	I1006 02:35:45.290481 2333562 round_trippers.go:580]     Audit-Id: 0405ae31-821e-49fb-afa5-479b8883b5d5
	I1006 02:35:45.290489 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:45.290496 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:45.290999 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:45.786965 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:45.786988 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:45.786998 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:45.787005 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:45.789397 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:45.789416 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:45.789424 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:45.789433 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:45 GMT
	I1006 02:35:45.789439 2333562 round_trippers.go:580]     Audit-Id: beb83d03-4d8c-4389-8a1e-2d2c871edcf8
	I1006 02:35:45.789446 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:45.789452 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:45.789458 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:45.789627 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:46.286663 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:46.286688 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:46.286698 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:46.286706 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:46.289324 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:46.289350 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:46.289358 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:46.289365 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:46.289372 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:46.289378 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:46.289385 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:46 GMT
	I1006 02:35:46.289392 2333562 round_trippers.go:580]     Audit-Id: 5dfc5fc0-d318-40d8-8fc3-b9df079ff2cf
	I1006 02:35:46.289761 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:46.787001 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:46.787025 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:46.787035 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:46.787064 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:46.789549 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:46.789569 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:46.789577 2333562 round_trippers.go:580]     Audit-Id: e2a1c10a-8071-43ae-b200-932545f1ca85
	I1006 02:35:46.789584 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:46.789590 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:46.789596 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:46.789603 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:46.789609 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:46 GMT
	I1006 02:35:46.789808 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:46.790195 2333562 node_ready.go:58] node "multinode-951739-m02" has status "Ready":"False"
	I1006 02:35:47.287150 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:47.287174 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:47.287184 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:47.287192 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:47.289681 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:47.289702 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:47.289710 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:47.289717 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:47.289723 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:47.289729 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:47.289735 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:47 GMT
	I1006 02:35:47.289742 2333562 round_trippers.go:580]     Audit-Id: 020218ae-413a-4df8-b109-899622fd8120
	I1006 02:35:47.289845 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:47.786976 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:47.786999 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:47.787010 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:47.787017 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:47.789569 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:47.789590 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:47.789599 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:47 GMT
	I1006 02:35:47.789607 2333562 round_trippers.go:580]     Audit-Id: baf310a3-1215-4e57-88f1-8d4bad1697cc
	I1006 02:35:47.789613 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:47.789619 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:47.789626 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:47.789632 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:47.789748 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:48.286436 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:48.286456 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:48.286466 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:48.286477 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:48.288969 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:48.288991 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:48.288999 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:48.289006 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:48.289013 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:48 GMT
	I1006 02:35:48.289019 2333562 round_trippers.go:580]     Audit-Id: c114e6c9-63d5-4a3f-8ac6-234223a45bf3
	I1006 02:35:48.289025 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:48.289032 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:48.289147 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:48.787205 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:48.787228 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:48.787238 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:48.787246 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:48.789740 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:48.789769 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:48.789779 2333562 round_trippers.go:580]     Audit-Id: 46034489-b07d-4586-b624-d5eb5710685e
	I1006 02:35:48.789786 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:48.789793 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:48.789804 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:48.789811 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:48.789821 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:48 GMT
	I1006 02:35:48.790142 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:48.790526 2333562 node_ready.go:58] node "multinode-951739-m02" has status "Ready":"False"
	I1006 02:35:49.286761 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:49.286784 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:49.286793 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:49.286800 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:49.289488 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:49.289512 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:49.289521 2333562 round_trippers.go:580]     Audit-Id: 70ea64d8-33e9-4fbc-9bd6-31dae1e18892
	I1006 02:35:49.289527 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:49.289536 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:49.289542 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:49.289548 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:49.289554 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:49 GMT
	I1006 02:35:49.289683 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:49.786696 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:49.786723 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:49.786734 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:49.786742 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:49.789204 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:49.789225 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:49.789233 2333562 round_trippers.go:580]     Audit-Id: 0841d187-3d47-4f39-8303-3f916c1a280f
	I1006 02:35:49.789239 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:49.789245 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:49.789251 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:49.789258 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:49.789265 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:49 GMT
	I1006 02:35:49.789383 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:50.286581 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:50.286604 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:50.286616 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:50.286624 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:50.289107 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:50.289128 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:50.289136 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:50.289143 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:50.289149 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:50 GMT
	I1006 02:35:50.289155 2333562 round_trippers.go:580]     Audit-Id: a1c38a14-f67e-419b-9e9d-5691a65adbce
	I1006 02:35:50.289161 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:50.289167 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:50.289311 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:50.786413 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:50.786437 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:50.786447 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:50.786460 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:50.788995 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:50.789021 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:50.789029 2333562 round_trippers.go:580]     Audit-Id: bb7f4917-54ed-46b0-a6f9-6ffa65248a01
	I1006 02:35:50.789036 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:50.789042 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:50.789049 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:50.789055 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:50.789064 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:50 GMT
	I1006 02:35:50.789203 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:51.287309 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:51.287335 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:51.287346 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:51.287353 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:51.289719 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:51.289742 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:51.289751 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:51.289757 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:51.289763 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:51.289770 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:51.289779 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:51 GMT
	I1006 02:35:51.289789 2333562 round_trippers.go:580]     Audit-Id: 0b47812e-5f6f-40f8-a82c-7f6be32d239f
	I1006 02:35:51.290105 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:51.290484 2333562 node_ready.go:58] node "multinode-951739-m02" has status "Ready":"False"
	I1006 02:35:51.787204 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:51.787232 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:51.787241 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:51.787249 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:51.789713 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:51.789735 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:51.789743 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:51 GMT
	I1006 02:35:51.789752 2333562 round_trippers.go:580]     Audit-Id: 786b8b25-5c05-4abb-88df-f82edaec8a3f
	I1006 02:35:51.789758 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:51.789764 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:51.789770 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:51.789778 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:51.790106 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:52.287238 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:52.287258 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:52.287268 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:52.287275 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:52.289879 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:52.289954 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:52.289976 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:52 GMT
	I1006 02:35:52.290000 2333562 round_trippers.go:580]     Audit-Id: 527084a3-7069-40c0-a868-fcd00b151479
	I1006 02:35:52.290032 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:52.290085 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:52.290105 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:52.290135 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:52.290275 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:52.786732 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:52.786756 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:52.786766 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:52.786774 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:52.789368 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:52.789392 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:52.789400 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:52.789407 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:52.789413 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:52 GMT
	I1006 02:35:52.789419 2333562 round_trippers.go:580]     Audit-Id: 558e582d-baa7-40e0-b246-4726676fb4d6
	I1006 02:35:52.789426 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:52.789433 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:52.789620 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:53.286719 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:53.286751 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:53.286761 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:53.286769 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:53.289293 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:53.289369 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:53.289385 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:53.289393 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:53.289400 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:53.289406 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:53.289436 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:53 GMT
	I1006 02:35:53.289449 2333562 round_trippers.go:580]     Audit-Id: 46022319-7af5-4bba-9dc5-c862704f53ad
	I1006 02:35:53.289551 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:53.786993 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:53.787016 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:53.787026 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:53.787034 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:53.789597 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:53.789694 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:53.789712 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:53 GMT
	I1006 02:35:53.789732 2333562 round_trippers.go:580]     Audit-Id: bd57346f-d154-4f23-851d-7dfe62754778
	I1006 02:35:53.789742 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:53.789748 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:53.789754 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:53.789764 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:53.789880 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:53.790288 2333562 node_ready.go:58] node "multinode-951739-m02" has status "Ready":"False"
	I1006 02:35:54.287080 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:54.287105 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:54.287115 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:54.287123 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:54.289533 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:54.289554 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:54.289563 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:54.289569 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:54.289581 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:54.289590 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:54.289600 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:54 GMT
	I1006 02:35:54.289606 2333562 round_trippers.go:580]     Audit-Id: bee27cdb-c96b-43a7-a887-c6a3c5fd46df
	I1006 02:35:54.289951 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:54.786812 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:54.786838 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:54.786848 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:54.786855 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:54.789448 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:54.789469 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:54.789477 2333562 round_trippers.go:580]     Audit-Id: 5055f266-e81a-42a8-90ff-32ca79b798b6
	I1006 02:35:54.789483 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:54.789489 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:54.789496 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:54.789502 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:54.789508 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:54 GMT
	I1006 02:35:54.789652 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:55.286963 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:55.286988 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:55.286998 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:55.287006 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:55.289470 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:55.289494 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:55.289502 2333562 round_trippers.go:580]     Audit-Id: ae53d3df-4520-417c-8772-c6f4a52cbbbf
	I1006 02:35:55.289509 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:55.289515 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:55.289521 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:55.289532 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:55.289542 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:55 GMT
	I1006 02:35:55.289791 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:55.786447 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:55.786469 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:55.786479 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:55.786487 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:55.789126 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:55.789152 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:55.789162 2333562 round_trippers.go:580]     Audit-Id: 97145cbc-d33a-4297-b6f4-eb621667e97a
	I1006 02:35:55.789168 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:55.789175 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:55.789181 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:55.789187 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:55.789197 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:55 GMT
	I1006 02:35:55.789441 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:56.286449 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:56.286474 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:56.286484 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:56.286491 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:56.289223 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:56.289245 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:56.289253 2333562 round_trippers.go:580]     Audit-Id: 3798004c-4ab0-43bb-970a-82911e9a5a75
	I1006 02:35:56.289259 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:56.289266 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:56.289272 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:56.289278 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:56.289285 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:56 GMT
	I1006 02:35:56.289409 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:56.289791 2333562 node_ready.go:58] node "multinode-951739-m02" has status "Ready":"False"
	I1006 02:35:56.786483 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:56.786510 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:56.786521 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:56.786529 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:56.789072 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:56.789105 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:56.789113 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:56 GMT
	I1006 02:35:56.789120 2333562 round_trippers.go:580]     Audit-Id: 6128feed-00c1-489e-b955-a4517430625d
	I1006 02:35:56.789126 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:56.789132 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:56.789139 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:56.789145 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:56.789272 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:57.287329 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:57.287352 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:57.287362 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:57.287369 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:57.289789 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:57.289816 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:57.289824 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:57 GMT
	I1006 02:35:57.289830 2333562 round_trippers.go:580]     Audit-Id: a3da3299-90a2-4332-98b1-519c0c8d38db
	I1006 02:35:57.289837 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:57.289843 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:57.289849 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:57.289855 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:57.289967 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:57.786768 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:57.786790 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:57.786801 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:57.786808 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:57.789950 2333562 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1006 02:35:57.789982 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:57.790014 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:57.790021 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:57 GMT
	I1006 02:35:57.790028 2333562 round_trippers.go:580]     Audit-Id: d81ce92f-c118-427c-906d-64007d90cedf
	I1006 02:35:57.790058 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:57.790065 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:57.790071 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:57.790197 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:58.287105 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:58.287129 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:58.287138 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:58.287146 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:58.289593 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:58.289613 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:58.289622 2333562 round_trippers.go:580]     Audit-Id: fdac9ff2-b457-4866-99f9-bee7287f95fe
	I1006 02:35:58.289628 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:58.289635 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:58.289641 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:58.289647 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:58.289654 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:58 GMT
	I1006 02:35:58.289805 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:58.290186 2333562 node_ready.go:58] node "multinode-951739-m02" has status "Ready":"False"
	I1006 02:35:58.786410 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:58.786434 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:58.786444 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:58.786451 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:58.789033 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:58.789053 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:58.789061 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:58.789073 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:58.789080 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:58 GMT
	I1006 02:35:58.789086 2333562 round_trippers.go:580]     Audit-Id: cbaf3018-8a2b-4510-bcf2-2d68924abb75
	I1006 02:35:58.789092 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:58.789098 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:58.789438 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:59.287147 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:59.287171 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:59.287181 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:59.287190 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:59.289774 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:59.289798 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:59.289805 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:59.289812 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:59.289818 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:59.289824 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:59 GMT
	I1006 02:35:59.289831 2333562 round_trippers.go:580]     Audit-Id: 9126066f-c51d-4a37-8ad1-67664a1989d8
	I1006 02:35:59.289837 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:59.289968 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:35:59.787061 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:35:59.787085 2333562 round_trippers.go:469] Request Headers:
	I1006 02:35:59.787095 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:35:59.787102 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:35:59.789504 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:35:59.789526 2333562 round_trippers.go:577] Response Headers:
	I1006 02:35:59.789537 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:35:59.789543 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:35:59.789551 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:35:59 GMT
	I1006 02:35:59.789566 2333562 round_trippers.go:580]     Audit-Id: 4b543bad-43d5-4068-ba34-b19867ec870b
	I1006 02:35:59.789573 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:35:59.789580 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:35:59.789747 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:36:00.286749 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:36:00.286776 2333562 round_trippers.go:469] Request Headers:
	I1006 02:36:00.286787 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:36:00.286794 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:36:00.289775 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:36:00.289810 2333562 round_trippers.go:577] Response Headers:
	I1006 02:36:00.289819 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:36:00.289826 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:36:00.289832 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:36:00 GMT
	I1006 02:36:00.289839 2333562 round_trippers.go:580]     Audit-Id: afbfcb18-21ca-4034-ac24-a23bc24c6720
	I1006 02:36:00.289845 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:36:00.289853 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:36:00.290241 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"513","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1006 02:36:00.290680 2333562 node_ready.go:58] node "multinode-951739-m02" has status "Ready":"False"
	I1006 02:36:00.786636 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:36:00.786661 2333562 round_trippers.go:469] Request Headers:
	I1006 02:36:00.786671 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:36:00.786678 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:36:00.789134 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:36:00.789157 2333562 round_trippers.go:577] Response Headers:
	I1006 02:36:00.789166 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:36:00 GMT
	I1006 02:36:00.789172 2333562 round_trippers.go:580]     Audit-Id: 980b18cc-b6c2-40f4-9704-700a1910f240
	I1006 02:36:00.789180 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:36:00.789186 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:36:00.789194 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:36:00.789203 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:36:00.789634 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"535","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5378 chars]
	I1006 02:36:00.790031 2333562 node_ready.go:49] node "multinode-951739-m02" has status "Ready":"True"
	I1006 02:36:00.790048 2333562 node_ready.go:38] duration metric: took 30.510123232s waiting for node "multinode-951739-m02" to be "Ready" ...
	I1006 02:36:00.790058 2333562 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
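The two log statements above summarize the pattern that fills this section: the client polls GET /api/v1/nodes/<name> roughly every 500ms until the node's Ready condition turns True (about 30.5s here), then switches to waiting on the system-critical pods. Below is a minimal client-go sketch of that node-readiness loop; it is illustrative only — the function names, the 500ms interval, and the kubeconfig loading are assumptions for the sketch, not minikube's actual node_ready.go code.

// Hedged sketch: reproduces the polling pattern visible in the log above.
// Not minikube's implementation; names and timeouts are illustrative.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node object until its NodeReady condition is True.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err // a real loop might tolerate transient API errors here
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil // Ready condition not reported yet
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// In the log, this wait takes ~30.5s before "Ready":"True" appears.
	fmt.Println(waitNodeReady(cs, "multinode-951739-m02", 6*time.Minute))
}

Each iteration of this loop corresponds to one GET/response-header/response-body triple in the log; the resourceVersion bump from 513 to 535 marks the update in which the kubelet reported the node Ready.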
	I1006 02:36:00.790127 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1006 02:36:00.790132 2333562 round_trippers.go:469] Request Headers:
	I1006 02:36:00.790140 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:36:00.790147 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:36:00.793699 2333562 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1006 02:36:00.793718 2333562 round_trippers.go:577] Response Headers:
	I1006 02:36:00.793726 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:36:00.793733 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:36:00.793739 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:36:00 GMT
	I1006 02:36:00.793745 2333562 round_trippers.go:580]     Audit-Id: b70af5e7-16bb-419f-a00d-1b4fde1a7f78
	I1006 02:36:00.793751 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:36:00.793757 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:36:00.794451 2333562 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"535"},"items":[{"metadata":{"name":"coredns-5dd5756b68-tswm4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"82def565-623c-4885-b2a3-87c5302c1841","resourceVersion":"445","creationTimestamp":"2023-10-06T02:34:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"61adee40-8c8c-4b55-82f7-ad79fd22292d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"61adee40-8c8c-4b55-82f7-ad79fd22292d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68972 chars]
	I1006 02:36:00.797337 2333562 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-tswm4" in "kube-system" namespace to be "Ready" ...
	I1006 02:36:00.797429 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-tswm4
	I1006 02:36:00.797441 2333562 round_trippers.go:469] Request Headers:
	I1006 02:36:00.797450 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:36:00.797457 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:36:00.800070 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:36:00.800097 2333562 round_trippers.go:577] Response Headers:
	I1006 02:36:00.800105 2333562 round_trippers.go:580]     Audit-Id: 452bbe5f-23f0-4528-a9f5-b136861a1122
	I1006 02:36:00.800111 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:36:00.800118 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:36:00.800124 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:36:00.800130 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:36:00.800140 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:36:00 GMT
	I1006 02:36:00.800396 2333562 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-tswm4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"82def565-623c-4885-b2a3-87c5302c1841","resourceVersion":"445","creationTimestamp":"2023-10-06T02:34:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"61adee40-8c8c-4b55-82f7-ad79fd22292d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"61adee40-8c8c-4b55-82f7-ad79fd22292d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1006 02:36:00.800922 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:36:00.800938 2333562 round_trippers.go:469] Request Headers:
	I1006 02:36:00.800947 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:36:00.800957 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:36:00.803224 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:36:00.803245 2333562 round_trippers.go:577] Response Headers:
	I1006 02:36:00.803253 2333562 round_trippers.go:580]     Audit-Id: 3b5b1ce6-257d-44f3-b5f4-219b68091941
	I1006 02:36:00.803280 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:36:00.803294 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:36:00.803301 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:36:00.803311 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:36:00.803321 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:36:00 GMT
	I1006 02:36:00.803475 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"429","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1006 02:36:00.803917 2333562 pod_ready.go:92] pod "coredns-5dd5756b68-tswm4" in "kube-system" namespace has status "Ready":"True"
	I1006 02:36:00.803936 2333562 pod_ready.go:81] duration metric: took 6.569952ms waiting for pod "coredns-5dd5756b68-tswm4" in "kube-system" namespace to be "Ready" ...
	I1006 02:36:00.803958 2333562 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-951739" in "kube-system" namespace to be "Ready" ...
	I1006 02:36:00.804022 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-951739
	I1006 02:36:00.804035 2333562 round_trippers.go:469] Request Headers:
	I1006 02:36:00.804043 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:36:00.804050 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:36:00.806243 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:36:00.806265 2333562 round_trippers.go:577] Response Headers:
	I1006 02:36:00.806273 2333562 round_trippers.go:580]     Audit-Id: b762326c-11f2-4dd9-b847-b2bdefd275c9
	I1006 02:36:00.806279 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:36:00.806286 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:36:00.806292 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:36:00.806298 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:36:00.806307 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:36:00 GMT
	I1006 02:36:00.806421 2333562 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-951739","namespace":"kube-system","uid":"bef22c05-be2f-4ea4-822d-2eba636c713e","resourceVersion":"418","creationTimestamp":"2023-10-06T02:34:24Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"af5e192498cd67a5eafe7312fdcb281d","kubernetes.io/config.mirror":"af5e192498cd67a5eafe7312fdcb281d","kubernetes.io/config.seen":"2023-10-06T02:34:24.422905048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1006 02:36:00.806870 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:36:00.806886 2333562 round_trippers.go:469] Request Headers:
	I1006 02:36:00.806893 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:36:00.806900 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:36:00.809155 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:36:00.809205 2333562 round_trippers.go:577] Response Headers:
	I1006 02:36:00.809245 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:36:00.809269 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:36:00.809288 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:36:00.809310 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:36:00 GMT
	I1006 02:36:00.809342 2333562 round_trippers.go:580]     Audit-Id: 133e3035-4f53-42d1-8dee-fea7c53c29ea
	I1006 02:36:00.809366 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:36:00.809530 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"429","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1006 02:36:00.809930 2333562 pod_ready.go:92] pod "etcd-multinode-951739" in "kube-system" namespace has status "Ready":"True"
	I1006 02:36:00.809947 2333562 pod_ready.go:81] duration metric: took 5.976472ms waiting for pod "etcd-multinode-951739" in "kube-system" namespace to be "Ready" ...
	I1006 02:36:00.809964 2333562 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-951739" in "kube-system" namespace to be "Ready" ...
	I1006 02:36:00.810024 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-951739
	I1006 02:36:00.810035 2333562 round_trippers.go:469] Request Headers:
	I1006 02:36:00.810043 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:36:00.810050 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:36:00.813068 2333562 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1006 02:36:00.813154 2333562 round_trippers.go:577] Response Headers:
	I1006 02:36:00.813172 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:36:00.813179 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:36:00 GMT
	I1006 02:36:00.813186 2333562 round_trippers.go:580]     Audit-Id: 5f62df3d-f8b8-4c1c-a789-0b245d8ec70a
	I1006 02:36:00.813207 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:36:00.813221 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:36:00.813228 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:36:00.813357 2333562 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-951739","namespace":"kube-system","uid":"7129e4a8-1667-4441-b00d-5e0f59264803","resourceVersion":"357","creationTimestamp":"2023-10-06T02:34:24Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"3f4141b08fc9414f47ccfd58153cd186","kubernetes.io/config.mirror":"3f4141b08fc9414f47ccfd58153cd186","kubernetes.io/config.seen":"2023-10-06T02:34:24.422911111Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1006 02:36:00.813890 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:36:00.813907 2333562 round_trippers.go:469] Request Headers:
	I1006 02:36:00.813914 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:36:00.813921 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:36:00.816107 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:36:00.816130 2333562 round_trippers.go:577] Response Headers:
	I1006 02:36:00.816138 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:36:00.816145 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:36:00.816151 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:36:00 GMT
	I1006 02:36:00.816157 2333562 round_trippers.go:580]     Audit-Id: 6da2b5b0-202a-4f78-bff5-38d478b1476a
	I1006 02:36:00.816164 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:36:00.816174 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:36:00.816291 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"429","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1006 02:36:00.816673 2333562 pod_ready.go:92] pod "kube-apiserver-multinode-951739" in "kube-system" namespace has status "Ready":"True"
	I1006 02:36:00.816692 2333562 pod_ready.go:81] duration metric: took 6.716379ms waiting for pod "kube-apiserver-multinode-951739" in "kube-system" namespace to be "Ready" ...
	I1006 02:36:00.816704 2333562 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-951739" in "kube-system" namespace to be "Ready" ...
	I1006 02:36:00.816767 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-951739
	I1006 02:36:00.816778 2333562 round_trippers.go:469] Request Headers:
	I1006 02:36:00.816787 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:36:00.816794 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:36:00.819221 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:36:00.819258 2333562 round_trippers.go:577] Response Headers:
	I1006 02:36:00.819267 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:36:00.819274 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:36:00 GMT
	I1006 02:36:00.819280 2333562 round_trippers.go:580]     Audit-Id: 3cd52735-076e-4cca-9d9b-d31fb87e0856
	I1006 02:36:00.819286 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:36:00.819292 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:36:00.819301 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:36:00.819681 2333562 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-951739","namespace":"kube-system","uid":"8309b551-13a7-4115-a9a7-8e1f482fbdf4","resourceVersion":"419","creationTimestamp":"2023-10-06T02:34:24Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"39065bcc46cf04349a0afe0158652bd4","kubernetes.io/config.mirror":"39065bcc46cf04349a0afe0158652bd4","kubernetes.io/config.seen":"2023-10-06T02:34:24.422912465Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1006 02:36:00.820237 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:36:00.820254 2333562 round_trippers.go:469] Request Headers:
	I1006 02:36:00.820263 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:36:00.820270 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:36:00.822702 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:36:00.822776 2333562 round_trippers.go:577] Response Headers:
	I1006 02:36:00.822810 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:36:00.822819 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:36:00 GMT
	I1006 02:36:00.822837 2333562 round_trippers.go:580]     Audit-Id: f3811f3d-f2e8-4faf-82b6-f624f6c5869b
	I1006 02:36:00.822852 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:36:00.822886 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:36:00.822900 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:36:00.823076 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"429","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1006 02:36:00.823588 2333562 pod_ready.go:92] pod "kube-controller-manager-multinode-951739" in "kube-system" namespace has status "Ready":"True"
	I1006 02:36:00.823610 2333562 pod_ready.go:81] duration metric: took 6.895759ms waiting for pod "kube-controller-manager-multinode-951739" in "kube-system" namespace to be "Ready" ...
	I1006 02:36:00.823623 2333562 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7jkqc" in "kube-system" namespace to be "Ready" ...
	I1006 02:36:00.987032 2333562 request.go:629] Waited for 163.306108ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7jkqc
	I1006 02:36:00.987117 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7jkqc
	I1006 02:36:00.987126 2333562 round_trippers.go:469] Request Headers:
	I1006 02:36:00.987136 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:36:00.987147 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:36:00.989923 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:36:00.989998 2333562 round_trippers.go:577] Response Headers:
	I1006 02:36:00.990020 2333562 round_trippers.go:580]     Audit-Id: 5a316617-e9ca-473c-9505-9ed8cc8dbb34
	I1006 02:36:00.990044 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:36:00.990084 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:36:00.990114 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:36:00.990137 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:36:00.990163 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:36:00 GMT
	I1006 02:36:00.990396 2333562 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7jkqc","generateName":"kube-proxy-","namespace":"kube-system","uid":"df661cc8-4197-4da3-819b-62333fe39c94","resourceVersion":"499","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4ad46367-bdfb-4340-af54-8507ab3db445","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ad46367-bdfb-4340-af54-8507ab3db445\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1006 02:36:01.187339 2333562 request.go:629] Waited for 196.324397ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:36:01.187399 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739-m02
	I1006 02:36:01.187406 2333562 round_trippers.go:469] Request Headers:
	I1006 02:36:01.187415 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:36:01.187426 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:36:01.190296 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:36:01.190319 2333562 round_trippers.go:577] Response Headers:
	I1006 02:36:01.190328 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:36:01.190334 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:36:01.190341 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:36:01 GMT
	I1006 02:36:01.190354 2333562 round_trippers.go:580]     Audit-Id: 029adfcb-bfa0-4ba0-b160-14c28f1555f6
	I1006 02:36:01.190361 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:36:01.190367 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:36:01.190486 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739-m02","uid":"3a067b4e-2053-4477-8871-fadffe296805","resourceVersion":"536","creationTimestamp":"2023-10-06T02:35:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:35:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5258 chars]
	I1006 02:36:01.190878 2333562 pod_ready.go:92] pod "kube-proxy-7jkqc" in "kube-system" namespace has status "Ready":"True"
	I1006 02:36:01.190896 2333562 pod_ready.go:81] duration metric: took 367.250763ms waiting for pod "kube-proxy-7jkqc" in "kube-system" namespace to be "Ready" ...
	I1006 02:36:01.190906 2333562 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lwrtj" in "kube-system" namespace to be "Ready" ...
	I1006 02:36:01.387353 2333562 request.go:629] Waited for 196.362313ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lwrtj
	I1006 02:36:01.387414 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lwrtj
	I1006 02:36:01.387420 2333562 round_trippers.go:469] Request Headers:
	I1006 02:36:01.387429 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:36:01.387440 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:36:01.390162 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:36:01.390192 2333562 round_trippers.go:577] Response Headers:
	I1006 02:36:01.390202 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:36:01.390210 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:36:01 GMT
	I1006 02:36:01.390224 2333562 round_trippers.go:580]     Audit-Id: 9ea9f314-68fb-4d85-9050-99ef463dc8d3
	I1006 02:36:01.390232 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:36:01.390239 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:36:01.390248 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:36:01.390570 2333562 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lwrtj","generateName":"kube-proxy-","namespace":"kube-system","uid":"a24c85d4-5722-49cd-bfd9-adc611cca199","resourceVersion":"414","creationTimestamp":"2023-10-06T02:34:36Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4ad46367-bdfb-4340-af54-8507ab3db445","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ad46367-bdfb-4340-af54-8507ab3db445\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1006 02:36:01.587455 2333562 request.go:629] Waited for 196.318309ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:36:01.587516 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:36:01.587526 2333562 round_trippers.go:469] Request Headers:
	I1006 02:36:01.587535 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:36:01.587546 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:36:01.590146 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:36:01.590261 2333562 round_trippers.go:577] Response Headers:
	I1006 02:36:01.590297 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:36:01.590306 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:36:01.590313 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:36:01.590320 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:36:01 GMT
	I1006 02:36:01.590344 2333562 round_trippers.go:580]     Audit-Id: dfb5bed0-c444-493a-a8e3-4c4c3deaadd4
	I1006 02:36:01.590357 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:36:01.590487 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"429","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1006 02:36:01.590907 2333562 pod_ready.go:92] pod "kube-proxy-lwrtj" in "kube-system" namespace has status "Ready":"True"
	I1006 02:36:01.590926 2333562 pod_ready.go:81] duration metric: took 400.012678ms waiting for pod "kube-proxy-lwrtj" in "kube-system" namespace to be "Ready" ...
	I1006 02:36:01.590937 2333562 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-951739" in "kube-system" namespace to be "Ready" ...
	I1006 02:36:01.787314 2333562 request.go:629] Waited for 196.311992ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-951739
	I1006 02:36:01.787391 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-951739
	I1006 02:36:01.787402 2333562 round_trippers.go:469] Request Headers:
	I1006 02:36:01.787411 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:36:01.787419 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:36:01.790115 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:36:01.790143 2333562 round_trippers.go:577] Response Headers:
	I1006 02:36:01.790152 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:36:01.790159 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:36:01 GMT
	I1006 02:36:01.790172 2333562 round_trippers.go:580]     Audit-Id: 6a856631-d39b-4f74-a249-b32f772db83a
	I1006 02:36:01.790181 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:36:01.790187 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:36:01.790193 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:36:01.790317 2333562 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-951739","namespace":"kube-system","uid":"72163d36-23a0-4b32-b6bb-8c79dc9145b6","resourceVersion":"347","creationTimestamp":"2023-10-06T02:34:24Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c53f7d1bf40c93d218c2763d6e42215d","kubernetes.io/config.mirror":"c53f7d1bf40c93d218c2763d6e42215d","kubernetes.io/config.seen":"2023-10-06T02:34:24.422913417Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-06T02:34:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1006 02:36:01.987153 2333562 request.go:629] Waited for 196.366949ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:36:01.987230 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-951739
	I1006 02:36:01.987236 2333562 round_trippers.go:469] Request Headers:
	I1006 02:36:01.987246 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:36:01.987258 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:36:01.997060 2333562 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1006 02:36:01.997103 2333562 round_trippers.go:577] Response Headers:
	I1006 02:36:01.997113 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:36:01.997120 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:36:01.997127 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:36:01 GMT
	I1006 02:36:01.997134 2333562 round_trippers.go:580]     Audit-Id: 7aacb5ee-d378-481e-9e95-4ccba24ba8ed
	I1006 02:36:01.997140 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:36:01.997147 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:36:01.997297 2333562 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"429","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-06T02:34:20Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1006 02:36:01.997725 2333562 pod_ready.go:92] pod "kube-scheduler-multinode-951739" in "kube-system" namespace has status "Ready":"True"
	I1006 02:36:01.997744 2333562 pod_ready.go:81] duration metric: took 406.798882ms waiting for pod "kube-scheduler-multinode-951739" in "kube-system" namespace to be "Ready" ...
	I1006 02:36:01.997759 2333562 pod_ready.go:38] duration metric: took 1.207690266s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1006 02:36:01.997777 2333562 system_svc.go:44] waiting for kubelet service to be running ....
	I1006 02:36:01.997848 2333562 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 02:36:02.013635 2333562 system_svc.go:56] duration metric: took 15.846139ms WaitForService to wait for kubelet.
	I1006 02:36:02.013716 2333562 kubeadm.go:581] duration metric: took 31.754811546s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1006 02:36:02.013766 2333562 node_conditions.go:102] verifying NodePressure condition ...
	I1006 02:36:02.187143 2333562 request.go:629] Waited for 173.257094ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1006 02:36:02.187203 2333562 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1006 02:36:02.187210 2333562 round_trippers.go:469] Request Headers:
	I1006 02:36:02.187219 2333562 round_trippers.go:473]     Accept: application/json, */*
	I1006 02:36:02.187232 2333562 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1006 02:36:02.190146 2333562 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1006 02:36:02.190167 2333562 round_trippers.go:577] Response Headers:
	I1006 02:36:02.190187 2333562 round_trippers.go:580]     Audit-Id: 2a6c5368-8e09-443a-a790-526ec650d7c4
	I1006 02:36:02.190193 2333562 round_trippers.go:580]     Cache-Control: no-cache, private
	I1006 02:36:02.190200 2333562 round_trippers.go:580]     Content-Type: application/json
	I1006 02:36:02.190206 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 44265d3f-04d2-4ce2-b0b3-c7c1c4b80f4a
	I1006 02:36:02.190212 2333562 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5f9538-c754-48f7-b612-4645cd52ada3
	I1006 02:36:02.190219 2333562 round_trippers.go:580]     Date: Fri, 06 Oct 2023 02:36:02 GMT
	I1006 02:36:02.190400 2333562 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"537"},"items":[{"metadata":{"name":"multinode-951739","uid":"9e7a51e5-3820-4cb0-a795-9b89c01cc7c4","resourceVersion":"429","creationTimestamp":"2023-10-06T02:34:21Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-951739","kubernetes.io/os":"linux","minikube.k8s.io/commit":"84890cb24d0240d9d992d7c7712ee519ceed4154","minikube.k8s.io/name":"multinode-951739","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_06T02_34_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12332 chars]
	I1006 02:36:02.191090 2333562 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 02:36:02.191105 2333562 node_conditions.go:123] node cpu capacity is 2
	I1006 02:36:02.191115 2333562 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 02:36:02.191121 2333562 node_conditions.go:123] node cpu capacity is 2
	I1006 02:36:02.191127 2333562 node_conditions.go:105] duration metric: took 177.349272ms to run NodePressure ...
	I1006 02:36:02.191138 2333562 start.go:228] waiting for startup goroutines ...
	I1006 02:36:02.191161 2333562 start.go:242] writing updated cluster config ...
	I1006 02:36:02.191471 2333562 ssh_runner.go:195] Run: rm -f paused
	I1006 02:36:02.254829 2333562 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1006 02:36:02.258161 2333562 out.go:177] * Done! kubectl is now configured to use "multinode-951739" cluster and "default" namespace by default
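	
	For reference, the readiness polling captured above can be reproduced by hand with kubectl (a minimal sketch, not part of the test run; the context name "multinode-951739", the node name, and the k8s-app=kube-dns label are taken from the log itself):
	
	  # Wait for the second node to report Ready (node name from the log)
	  kubectl --context multinode-951739 wait --for=condition=Ready node/multinode-951739-m02 --timeout=6m
	  # Wait for one of the system-critical pod groups minikube checks, e.g. kube-dns
	  kubectl --context multinode-951739 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m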
	
	* 
	* ==> CRI-O <==
	* Oct 06 02:35:09 multinode-951739 crio[906]: time="2023-10-06 02:35:09.107279208Z" level=info msg="Starting container: 381a3376b3af19cb9b4cce3f5901a0a3339b497cb418e315cb480b641facff20" id=4d44f504-6b6e-46ac-9373-6d32b0b47add name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 02:35:09 multinode-951739 crio[906]: time="2023-10-06 02:35:09.113245491Z" level=info msg="Created container 5ced0f69504500e9b4ac5d09c876fac760784ba7f6c19bd54b366fca89529a07: kube-system/coredns-5dd5756b68-tswm4/coredns" id=7e74b871-6bde-47bb-ae0c-58646d010045 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 02:35:09 multinode-951739 crio[906]: time="2023-10-06 02:35:09.113981641Z" level=info msg="Starting container: 5ced0f69504500e9b4ac5d09c876fac760784ba7f6c19bd54b366fca89529a07" id=3eff6cba-4a6e-49c3-b13b-b6ab039e58b7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 02:35:09 multinode-951739 crio[906]: time="2023-10-06 02:35:09.129932443Z" level=info msg="Started container" PID=1934 containerID=5ced0f69504500e9b4ac5d09c876fac760784ba7f6c19bd54b366fca89529a07 description=kube-system/coredns-5dd5756b68-tswm4/coredns id=3eff6cba-4a6e-49c3-b13b-b6ab039e58b7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=019d2a5a4ae49d599d880030633205c27a900ac3fcfdcf2872914ffe0d22362a
	Oct 06 02:35:09 multinode-951739 crio[906]: time="2023-10-06 02:35:09.130452825Z" level=info msg="Started container" PID=1932 containerID=381a3376b3af19cb9b4cce3f5901a0a3339b497cb418e315cb480b641facff20 description=kube-system/storage-provisioner/storage-provisioner id=4d44f504-6b6e-46ac-9373-6d32b0b47add name=/runtime.v1.RuntimeService/StartContainer sandboxID=f50a1fd9a10673f783274763c9bb56e80cd99b85787ce69db807ee4eb2390338
	Oct 06 02:36:04 multinode-951739 crio[906]: time="2023-10-06 02:36:04.987700076Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-z7b7t/POD" id=5a9dd437-fc42-48b2-8011-476e91de363e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 06 02:36:04 multinode-951739 crio[906]: time="2023-10-06 02:36:04.987768801Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 06 02:36:05 multinode-951739 crio[906]: time="2023-10-06 02:36:05.013905692Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-z7b7t Namespace:default ID:72e7493930118c11506162250ff871b7c1f04cece0a65d835d9abc7c49b4aa30 UID:13da5d37-0236-42ca-a852-eeecfae7de4f NetNS:/var/run/netns/a159f3fe-154f-48db-a66b-12d72efea64a Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 06 02:36:05 multinode-951739 crio[906]: time="2023-10-06 02:36:05.013952346Z" level=info msg="Adding pod default_busybox-5bc68d56bd-z7b7t to CNI network \"kindnet\" (type=ptp)"
	Oct 06 02:36:05 multinode-951739 crio[906]: time="2023-10-06 02:36:05.025131263Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-z7b7t Namespace:default ID:72e7493930118c11506162250ff871b7c1f04cece0a65d835d9abc7c49b4aa30 UID:13da5d37-0236-42ca-a852-eeecfae7de4f NetNS:/var/run/netns/a159f3fe-154f-48db-a66b-12d72efea64a Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 06 02:36:05 multinode-951739 crio[906]: time="2023-10-06 02:36:05.025283590Z" level=info msg="Checking pod default_busybox-5bc68d56bd-z7b7t for CNI network kindnet (type=ptp)"
	Oct 06 02:36:05 multinode-951739 crio[906]: time="2023-10-06 02:36:05.041566345Z" level=info msg="Ran pod sandbox 72e7493930118c11506162250ff871b7c1f04cece0a65d835d9abc7c49b4aa30 with infra container: default/busybox-5bc68d56bd-z7b7t/POD" id=5a9dd437-fc42-48b2-8011-476e91de363e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 06 02:36:05 multinode-951739 crio[906]: time="2023-10-06 02:36:05.042489965Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=cd592287-6be4-4da0-9304-d745ee69a275 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 02:36:05 multinode-951739 crio[906]: time="2023-10-06 02:36:05.042716154Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=cd592287-6be4-4da0-9304-d745ee69a275 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 02:36:05 multinode-951739 crio[906]: time="2023-10-06 02:36:05.043852252Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=ae672017-e0c2-42cf-a610-2243512bdec6 name=/runtime.v1.ImageService/PullImage
	Oct 06 02:36:05 multinode-951739 crio[906]: time="2023-10-06 02:36:05.045025740Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Oct 06 02:36:05 multinode-951739 crio[906]: time="2023-10-06 02:36:05.697860085Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Oct 06 02:36:06 multinode-951739 crio[906]: time="2023-10-06 02:36:06.918670557Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=ae672017-e0c2-42cf-a610-2243512bdec6 name=/runtime.v1.ImageService/PullImage
	Oct 06 02:36:06 multinode-951739 crio[906]: time="2023-10-06 02:36:06.919804636Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=1ce1d28d-1349-41ba-b880-cc9ba0f8bb28 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 02:36:06 multinode-951739 crio[906]: time="2023-10-06 02:36:06.920598550Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1496796,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=1ce1d28d-1349-41ba-b880-cc9ba0f8bb28 name=/runtime.v1.ImageService/ImageStatus
	Oct 06 02:36:06 multinode-951739 crio[906]: time="2023-10-06 02:36:06.922288203Z" level=info msg="Creating container: default/busybox-5bc68d56bd-z7b7t/busybox" id=a7efc5bc-a411-4765-bc31-5f0d2be6abfc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 02:36:06 multinode-951739 crio[906]: time="2023-10-06 02:36:06.922604623Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 06 02:36:06 multinode-951739 crio[906]: time="2023-10-06 02:36:06.996080078Z" level=info msg="Created container ce24f4a15e6b9a693d402fd6c4e1a4f35f374bf7e53cff76099f66b49679a7ca: default/busybox-5bc68d56bd-z7b7t/busybox" id=a7efc5bc-a411-4765-bc31-5f0d2be6abfc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 02:36:06 multinode-951739 crio[906]: time="2023-10-06 02:36:06.996653456Z" level=info msg="Starting container: ce24f4a15e6b9a693d402fd6c4e1a4f35f374bf7e53cff76099f66b49679a7ca" id=a954d10f-f1b5-461b-94af-c3a7d1771d5a name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 02:36:07 multinode-951739 crio[906]: time="2023-10-06 02:36:07.007217601Z" level=info msg="Started container" PID=2083 containerID=ce24f4a15e6b9a693d402fd6c4e1a4f35f374bf7e53cff76099f66b49679a7ca description=default/busybox-5bc68d56bd-z7b7t/busybox id=a954d10f-f1b5-461b-94af-c3a7d1771d5a name=/runtime.v1.RuntimeService/StartContainer sandboxID=72e7493930118c11506162250ff871b7c1f04cece0a65d835d9abc7c49b4aa30
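	
	The container status table below is CRI-O's view of the node; roughly the same listing can be pulled live from the machine (a sketch, assuming the minikube profile name from this run):
	
	  minikube -p multinode-951739 ssh "sudo crictl ps -a"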
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ce24f4a15e6b9       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   5 seconds ago        Running             busybox                   0                   72e7493930118       busybox-5bc68d56bd-z7b7t
	5ced0f6950450       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      About a minute ago   Running             coredns                   0                   019d2a5a4ae49       coredns-5dd5756b68-tswm4
	381a3376b3af1       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      About a minute ago   Running             storage-provisioner       0                   f50a1fd9a1067       storage-provisioner
	d4cfb59486e3a       7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa                                      About a minute ago   Running             kube-proxy                0                   c6cc333df98ea       kube-proxy-lwrtj
	c96ef03c71659       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                      About a minute ago   Running             kindnet-cni               0                   ef3d818b5fbb2       kindnet-6r6sg
	2ddc38a919ab7       64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7                                      About a minute ago   Running             kube-scheduler            0                   4b504f518dcef       kube-scheduler-multinode-951739
	58e9b6192941e       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      About a minute ago   Running             etcd                      0                   eaaf288b28bbe       etcd-multinode-951739
	c4c99a6217e71       89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c                                      About a minute ago   Running             kube-controller-manager   0                   496859092b2c2       kube-controller-manager-multinode-951739
	10d6403370796       30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c                                      About a minute ago   Running             kube-apiserver            0                   307191acd3551       kube-apiserver-multinode-951739
	
	* 
	* ==> coredns [5ced0f69504500e9b4ac5d09c876fac760784ba7f6c19bd54b366fca89529a07] <==
	* [INFO] 10.244.1.2:37411 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121935s
	[INFO] 10.244.0.3:44406 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099675s
	[INFO] 10.244.0.3:42706 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001154814s
	[INFO] 10.244.0.3:34577 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000076808s
	[INFO] 10.244.0.3:38036 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00006121s
	[INFO] 10.244.0.3:54353 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000974778s
	[INFO] 10.244.0.3:59136 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000061883s
	[INFO] 10.244.0.3:35770 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000040016s
	[INFO] 10.244.0.3:52179 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062325s
	[INFO] 10.244.1.2:51402 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010176s
	[INFO] 10.244.1.2:49299 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104959s
	[INFO] 10.244.1.2:40234 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082904s
	[INFO] 10.244.1.2:53754 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105115s
	[INFO] 10.244.0.3:44264 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132356s
	[INFO] 10.244.0.3:45082 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084766s
	[INFO] 10.244.0.3:35224 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00008658s
	[INFO] 10.244.0.3:47552 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071687s
	[INFO] 10.244.1.2:40147 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169894s
	[INFO] 10.244.1.2:44191 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000158357s
	[INFO] 10.244.1.2:40092 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000151014s
	[INFO] 10.244.1.2:55300 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000130928s
	[INFO] 10.244.0.3:39792 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010701s
	[INFO] 10.244.0.3:36124 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000067676s
	[INFO] 10.244.0.3:52462 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000071712s
	[INFO] 10.244.0.3:34750 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00005792s
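	
	The CoreDNS lookups above (kubernetes.default, host.minikube.internal) can be exercised manually from a throwaway pod (a sketch; the busybox image is the one already pulled in the CRI-O log above, and "dnscheck" is just a hypothetical pod name):
	
	  kubectl --context multinode-951739 run dnscheck --image=gcr.io/k8s-minikube/busybox:1.28 --rm -it --restart=Never -- nslookup kubernetes.default.svc.cluster.local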
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-951739
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-951739
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=84890cb24d0240d9d992d7c7712ee519ceed4154
	                    minikube.k8s.io/name=multinode-951739
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_06T02_34_25_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Oct 2023 02:34:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-951739
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Oct 2023 02:36:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Oct 2023 02:35:08 +0000   Fri, 06 Oct 2023 02:34:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Oct 2023 02:35:08 +0000   Fri, 06 Oct 2023 02:34:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Oct 2023 02:35:08 +0000   Fri, 06 Oct 2023 02:34:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Oct 2023 02:35:08 +0000   Fri, 06 Oct 2023 02:35:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-951739
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 7b99b7b4ec944a50a7b3414e6c768e4e
	  System UUID:                6ecab73d-423b-4669-9b10-d1296bdfee0e
	  Boot ID:                    d6810820-8fb1-4098-8489-41f3441712b9
	  Kernel Version:             5.15.0-1047-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-z7b7t                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-5dd5756b68-tswm4                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     95s
	  kube-system                 etcd-multinode-951739                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         108s
	  kube-system                 kindnet-6r6sg                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      96s
	  kube-system                 kube-apiserver-multinode-951739             250m (12%)    0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-controller-manager-multinode-951739    200m (10%)    0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-proxy-lwrtj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-scheduler-multinode-951739             100m (5%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 94s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  116s (x8 over 116s)  kubelet          Node multinode-951739 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s (x8 over 116s)  kubelet          Node multinode-951739 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s (x8 over 116s)  kubelet          Node multinode-951739 status is now: NodeHasSufficientPID
	  Normal  Starting                 108s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  108s                 kubelet          Node multinode-951739 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    108s                 kubelet          Node multinode-951739 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     108s                 kubelet          Node multinode-951739 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           96s                  node-controller  Node multinode-951739 event: Registered Node multinode-951739 in Controller
	  Normal  NodeReady                64s                  kubelet          Node multinode-951739 status is now: NodeReady
	
	
	Name:               multinode-951739-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-951739-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Oct 2023 02:35:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-951739-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Oct 2023 02:36:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Oct 2023 02:36:00 +0000   Fri, 06 Oct 2023 02:35:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Oct 2023 02:36:00 +0000   Fri, 06 Oct 2023 02:35:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Oct 2023 02:36:00 +0000   Fri, 06 Oct 2023 02:35:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Oct 2023 02:36:00 +0000   Fri, 06 Oct 2023 02:36:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-951739-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 c65b8372807f4eae973e8cb62573a6b8
	  System UUID:                ccef6018-0622-4a1c-95d2-89737eefb379
	  Boot ID:                    d6810820-8fb1-4098-8489-41f3441712b9
	  Kernel Version:             5.15.0-1047-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-qkd4k    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kindnet-vgwj8               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      43s
	  kube-system                 kube-proxy-7jkqc            0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 42s                kube-proxy       
	  Normal  NodeHasSufficientMemory  44s (x5 over 45s)  kubelet          Node multinode-951739-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    44s (x5 over 45s)  kubelet          Node multinode-951739-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     44s (x5 over 45s)  kubelet          Node multinode-951739-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           41s                node-controller  Node multinode-951739-m02 event: Registered Node multinode-951739-m02 in Controller
	  Normal  NodeReady                12s                kubelet          Node multinode-951739-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001054] FS-Cache: O-key=[8] '276a3b0000000000'
	[  +0.000712] FS-Cache: N-cookie c=0000009c [p=00000093 fl=2 nc=0 na=1]
	[  +0.000923] FS-Cache: N-cookie d=00000000b75c0848{9p.inode} n=00000000ea8162ba
	[  +0.001002] FS-Cache: N-key=[8] '276a3b0000000000'
	[  +0.002663] FS-Cache: Duplicate cookie detected
	[  +0.000700] FS-Cache: O-cookie c=00000096 [p=00000093 fl=226 nc=0 na=1]
	[  +0.000920] FS-Cache: O-cookie d=00000000b75c0848{9p.inode} n=00000000fc3db6f4
	[  +0.000983] FS-Cache: O-key=[8] '276a3b0000000000'
	[  +0.000674] FS-Cache: N-cookie c=0000009d [p=00000093 fl=2 nc=0 na=1]
	[  +0.000946] FS-Cache: N-cookie d=00000000b75c0848{9p.inode} n=00000000b9bd865e
	[  +0.000999] FS-Cache: N-key=[8] '276a3b0000000000'
	[  +2.732427] FS-Cache: Duplicate cookie detected
	[  +0.000701] FS-Cache: O-cookie c=00000094 [p=00000093 fl=226 nc=0 na=1]
	[  +0.000949] FS-Cache: O-cookie d=00000000b75c0848{9p.inode} n=00000000d93f34d8
	[  +0.000995] FS-Cache: O-key=[8] '266a3b0000000000'
	[  +0.000657] FS-Cache: N-cookie c=0000009f [p=00000093 fl=2 nc=0 na=1]
	[  +0.000872] FS-Cache: N-cookie d=00000000b75c0848{9p.inode} n=000000000c4a9176
	[  +0.000974] FS-Cache: N-key=[8] '266a3b0000000000'
	[  +0.306196] FS-Cache: Duplicate cookie detected
	[  +0.000700] FS-Cache: O-cookie c=00000099 [p=00000093 fl=226 nc=0 na=1]
	[  +0.000922] FS-Cache: O-cookie d=00000000b75c0848{9p.inode} n=0000000019ad38e0
	[  +0.001027] FS-Cache: O-key=[8] '2e6a3b0000000000'
	[  +0.000669] FS-Cache: N-cookie c=000000a0 [p=00000093 fl=2 nc=0 na=1]
	[  +0.000879] FS-Cache: N-cookie d=00000000b75c0848{9p.inode} n=00000000ea8162ba
	[  +0.000981] FS-Cache: N-key=[8] '2e6a3b0000000000'
	
	* 
	* ==> etcd [58e9b6192941e9ef2d42353ad932f381d777ca4145eb3f3df2803ec216f60692] <==
	* {"level":"info","ts":"2023-10-06T02:34:17.309279Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-06T02:34:17.309458Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-06T02:34:17.309484Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-06T02:34:17.309535Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-10-06T02:34:17.309716Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-10-06T02:34:17.327075Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-10-06T02:34:17.327131Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-10-06T02:34:17.883132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-06T02:34:17.883254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-06T02:34:17.883305Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-10-06T02:34:17.883346Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-10-06T02:34:17.883386Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-10-06T02:34:17.88343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-10-06T02:34:17.883463Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-10-06T02:34:17.891337Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-06T02:34:17.892388Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-10-06T02:34:17.892506Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-06T02:34:17.891306Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-951739 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-06T02:34:17.895084Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-06T02:34:17.89543Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-06T02:34:17.89551Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-06T02:34:17.895535Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-06T02:34:17.895576Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-06T02:34:17.89559Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-06T02:34:17.896384Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  02:36:12 up 12:18,  0 users,  load average: 1.19, 1.81, 1.90
	Linux multinode-951739 5.15.0-1047-aws #52~20.04.1-Ubuntu SMP Thu Sep 21 10:08:54 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [c96ef03c71659ce10974ab22831577bcc6a791530b03c69ed86c23ed2bfd49d2] <==
	* I1006 02:34:38.131956       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1006 02:35:08.447853       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I1006 02:35:08.462549       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1006 02:35:08.462580       1 main.go:227] handling current node
	I1006 02:35:18.471328       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1006 02:35:18.471357       1 main.go:227] handling current node
	I1006 02:35:28.483366       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1006 02:35:28.483398       1 main.go:227] handling current node
	I1006 02:35:38.488500       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1006 02:35:38.488527       1 main.go:227] handling current node
	I1006 02:35:38.488537       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1006 02:35:38.488544       1 main.go:250] Node multinode-951739-m02 has CIDR [10.244.1.0/24] 
	I1006 02:35:38.488703       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I1006 02:35:48.498967       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1006 02:35:48.498994       1 main.go:227] handling current node
	I1006 02:35:48.499006       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1006 02:35:48.499012       1 main.go:250] Node multinode-951739-m02 has CIDR [10.244.1.0/24] 
	I1006 02:35:58.511665       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1006 02:35:58.511694       1 main.go:227] handling current node
	I1006 02:35:58.511704       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1006 02:35:58.511711       1 main.go:250] Node multinode-951739-m02 has CIDR [10.244.1.0/24] 
	I1006 02:36:08.523276       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1006 02:36:08.523300       1 main.go:227] handling current node
	I1006 02:36:08.523311       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1006 02:36:08.523317       1 main.go:250] Node multinode-951739-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [10d6403370796cae7254428f4066e97e6b1b8683539dcf4e973259dec188288c] <==
	* I1006 02:34:20.929456       1 apf_controller.go:372] Starting API Priority and Fairness config controller
	I1006 02:34:21.279601       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1006 02:34:21.279606       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1006 02:34:20.905651       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I1006 02:34:21.300929       1 controller.go:624] quota admission added evaluator for: namespaces
	I1006 02:34:21.314263       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1006 02:34:21.364030       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1006 02:34:21.379437       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1006 02:34:21.379564       1 shared_informer.go:318] Caches are synced for configmaps
	I1006 02:34:21.917602       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1006 02:34:21.926272       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1006 02:34:21.926357       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1006 02:34:22.591748       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1006 02:34:22.633584       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1006 02:34:22.730736       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1006 02:34:22.738726       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I1006 02:34:22.739842       1 controller.go:624] quota admission added evaluator for: endpoints
	I1006 02:34:22.744434       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1006 02:34:23.084824       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1006 02:34:24.316189       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1006 02:34:24.331298       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1006 02:34:24.345716       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1006 02:34:36.355704       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1006 02:34:36.647402       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	E1006 02:36:07.775433       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0x4006d02c90), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0x40051bf180), ResponseWriter:(*httpsnoop.rw)(0x40051bf180), Flusher:(*httpsnoop.rw)(0x40051bf180), CloseNotifier:(*httpsnoop.rw)(0x40051bf180), Pusher:(*httpsnoop.rw)(0x40051bf180)}}, encoder:(*versioning.codec)(0x400210b4a0), memAllocator:(*runtime.Allocator)(0x4005e5dce0)})
	
	* 
	* ==> kube-controller-manager [c4c99a6217e71ef233abf74e65236b33086c4e1ad7333936b74ef66527b2c169] <==
	* I1006 02:34:37.418065       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.852599ms"
	I1006 02:34:37.418368       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="87.171µs"
	I1006 02:35:08.628895       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="91.265µs"
	I1006 02:35:08.647561       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="116.684µs"
	I1006 02:35:09.687672       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.280079ms"
	I1006 02:35:09.687964       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.466µs"
	I1006 02:35:11.007425       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1006 02:35:29.015128       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-951739-m02\" does not exist"
	I1006 02:35:29.031836       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-951739-m02" podCIDRs=["10.244.1.0/24"]
	I1006 02:35:29.052154       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-vgwj8"
	I1006 02:35:29.052267       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-7jkqc"
	I1006 02:35:31.010792       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-951739-m02"
	I1006 02:35:31.011255       1 event.go:307] "Event occurred" object="multinode-951739-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-951739-m02 event: Registered Node multinode-951739-m02 in Controller"
	I1006 02:36:00.756527       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-951739-m02"
	I1006 02:36:03.123176       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1006 02:36:03.140335       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-qkd4k"
	I1006 02:36:03.164366       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-z7b7t"
	I1006 02:36:03.198170       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="78.962111ms"
	I1006 02:36:03.233437       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="35.193051ms"
	I1006 02:36:03.249047       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="15.556763ms"
	I1006 02:36:03.249151       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="55.95µs"
	I1006 02:36:05.648363       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="9.096529ms"
	I1006 02:36:05.648551       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="45.719µs"
	I1006 02:36:07.766868       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.704226ms"
	I1006 02:36:07.767112       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="98.732µs"
	
	* 
	* ==> kube-proxy [d4cfb59486e3ac7420d218226e86b7497867086263b53422c21d5373df970734] <==
	* I1006 02:34:38.198863       1 server_others.go:69] "Using iptables proxy"
	I1006 02:34:38.216907       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I1006 02:34:38.341590       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1006 02:34:38.343903       1 server_others.go:152] "Using iptables Proxier"
	I1006 02:34:38.343938       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1006 02:34:38.343945       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1006 02:34:38.344020       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1006 02:34:38.344292       1 server.go:846] "Version info" version="v1.28.2"
	I1006 02:34:38.344308       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 02:34:38.346733       1 config.go:188] "Starting service config controller"
	I1006 02:34:38.346761       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1006 02:34:38.346784       1 config.go:97] "Starting endpoint slice config controller"
	I1006 02:34:38.346788       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1006 02:34:38.347675       1 config.go:315] "Starting node config controller"
	I1006 02:34:38.349040       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1006 02:34:38.447432       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1006 02:34:38.447434       1 shared_informer.go:318] Caches are synced for service config
	I1006 02:34:38.449879       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [2ddc38a919ab72da6e36b45ac65f92843822b5fb03bf0e2337a5f2879badd651] <==
	* W1006 02:34:22.421062       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1006 02:34:22.421731       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1006 02:34:22.421105       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1006 02:34:22.421843       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1006 02:34:22.421169       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1006 02:34:22.421870       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1006 02:34:22.421218       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1006 02:34:22.421884       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1006 02:34:22.421293       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1006 02:34:22.421897       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1006 02:34:22.421376       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1006 02:34:22.421815       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1006 02:34:22.421943       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1006 02:34:22.421950       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1006 02:34:22.421415       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1006 02:34:22.421965       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1006 02:34:22.421506       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1006 02:34:22.421977       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1006 02:34:22.421552       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1006 02:34:22.422002       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1006 02:34:22.421585       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1006 02:34:22.422017       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1006 02:34:22.421645       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1006 02:34:22.422029       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I1006 02:34:23.806459       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Oct 06 02:34:37 multinode-951739 kubelet[1396]: E1006 02:34:37.301496    1396 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 06 02:34:37 multinode-951739 kubelet[1396]: E1006 02:34:37.301531    1396 projected.go:198] Error preparing data for projected volume kube-api-access-c6gb9 for pod kube-system/kube-proxy-lwrtj: configmap "kube-root-ca.crt" not found
	Oct 06 02:34:37 multinode-951739 kubelet[1396]: E1006 02:34:37.301583    1396 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a24c85d4-5722-49cd-bfd9-adc611cca199-kube-api-access-c6gb9 podName:a24c85d4-5722-49cd-bfd9-adc611cca199 nodeName:}" failed. No retries permitted until 2023-10-06 02:34:37.801563659 +0000 UTC m=+13.522852749 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-c6gb9" (UniqueName: "kubernetes.io/projected/a24c85d4-5722-49cd-bfd9-adc611cca199-kube-api-access-c6gb9") pod "kube-proxy-lwrtj" (UID: "a24c85d4-5722-49cd-bfd9-adc611cca199") : configmap "kube-root-ca.crt" not found
	Oct 06 02:34:37 multinode-951739 kubelet[1396]: W1006 02:34:37.976567    1396 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/9edc152e73eb4597075d85be5cd5b8d85a9eb4dcbd14cb603ed7c177be913edc/crio-ef3d818b5fbb241ccd220774379e4c27bd6806c2232c0b080b4540a752ef4e44 WatchSource:0}: Error finding container ef3d818b5fbb241ccd220774379e4c27bd6806c2232c0b080b4540a752ef4e44: Status 404 returned error can't find the container with id ef3d818b5fbb241ccd220774379e4c27bd6806c2232c0b080b4540a752ef4e44
	Oct 06 02:34:38 multinode-951739 kubelet[1396]: I1006 02:34:38.619839    1396 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-6r6sg" podStartSLOduration=2.619793996 podCreationTimestamp="2023-10-06 02:34:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-06 02:34:38.605454847 +0000 UTC m=+14.326743945" watchObservedRunningTime="2023-10-06 02:34:38.619793996 +0000 UTC m=+14.341083094"
	Oct 06 02:34:44 multinode-951739 kubelet[1396]: I1006 02:34:44.484152    1396 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-lwrtj" podStartSLOduration=8.484101156 podCreationTimestamp="2023-10-06 02:34:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-06 02:34:38.622294016 +0000 UTC m=+14.343583106" watchObservedRunningTime="2023-10-06 02:34:44.484101156 +0000 UTC m=+20.205390246"
	Oct 06 02:35:08 multinode-951739 kubelet[1396]: I1006 02:35:08.598284    1396 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 06 02:35:08 multinode-951739 kubelet[1396]: I1006 02:35:08.628307    1396 topology_manager.go:215] "Topology Admit Handler" podUID="82def565-623c-4885-b2a3-87c5302c1841" podNamespace="kube-system" podName="coredns-5dd5756b68-tswm4"
	Oct 06 02:35:08 multinode-951739 kubelet[1396]: I1006 02:35:08.633417    1396 topology_manager.go:215] "Topology Admit Handler" podUID="473466d1-f407-4b35-b662-880c7ee0439a" podNamespace="kube-system" podName="storage-provisioner"
	Oct 06 02:35:08 multinode-951739 kubelet[1396]: I1006 02:35:08.766793    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwr6p\" (UniqueName: \"kubernetes.io/projected/82def565-623c-4885-b2a3-87c5302c1841-kube-api-access-gwr6p\") pod \"coredns-5dd5756b68-tswm4\" (UID: \"82def565-623c-4885-b2a3-87c5302c1841\") " pod="kube-system/coredns-5dd5756b68-tswm4"
	Oct 06 02:35:08 multinode-951739 kubelet[1396]: I1006 02:35:08.766851    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krcls\" (UniqueName: \"kubernetes.io/projected/473466d1-f407-4b35-b662-880c7ee0439a-kube-api-access-krcls\") pod \"storage-provisioner\" (UID: \"473466d1-f407-4b35-b662-880c7ee0439a\") " pod="kube-system/storage-provisioner"
	Oct 06 02:35:08 multinode-951739 kubelet[1396]: I1006 02:35:08.766881    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82def565-623c-4885-b2a3-87c5302c1841-config-volume\") pod \"coredns-5dd5756b68-tswm4\" (UID: \"82def565-623c-4885-b2a3-87c5302c1841\") " pod="kube-system/coredns-5dd5756b68-tswm4"
	Oct 06 02:35:08 multinode-951739 kubelet[1396]: I1006 02:35:08.766906    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/473466d1-f407-4b35-b662-880c7ee0439a-tmp\") pod \"storage-provisioner\" (UID: \"473466d1-f407-4b35-b662-880c7ee0439a\") " pod="kube-system/storage-provisioner"
	Oct 06 02:35:08 multinode-951739 kubelet[1396]: W1006 02:35:08.988168    1396 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/9edc152e73eb4597075d85be5cd5b8d85a9eb4dcbd14cb603ed7c177be913edc/crio-f50a1fd9a10673f783274763c9bb56e80cd99b85787ce69db807ee4eb2390338 WatchSource:0}: Error finding container f50a1fd9a10673f783274763c9bb56e80cd99b85787ce69db807ee4eb2390338: Status 404 returned error can't find the container with id f50a1fd9a10673f783274763c9bb56e80cd99b85787ce69db807ee4eb2390338
	Oct 06 02:35:08 multinode-951739 kubelet[1396]: W1006 02:35:08.992405    1396 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/9edc152e73eb4597075d85be5cd5b8d85a9eb4dcbd14cb603ed7c177be913edc/crio-019d2a5a4ae49d599d880030633205c27a900ac3fcfdcf2872914ffe0d22362a WatchSource:0}: Error finding container 019d2a5a4ae49d599d880030633205c27a900ac3fcfdcf2872914ffe0d22362a: Status 404 returned error can't find the container with id 019d2a5a4ae49d599d880030633205c27a900ac3fcfdcf2872914ffe0d22362a
	Oct 06 02:35:09 multinode-951739 kubelet[1396]: I1006 02:35:09.670400    1396 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=32.670355316 podCreationTimestamp="2023-10-06 02:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-06 02:35:09.6570501 +0000 UTC m=+45.378339189" watchObservedRunningTime="2023-10-06 02:35:09.670355316 +0000 UTC m=+45.391644406"
	Oct 06 02:36:03 multinode-951739 kubelet[1396]: I1006 02:36:03.185245    1396 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-tswm4" podStartSLOduration=86.185204075 podCreationTimestamp="2023-10-06 02:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-06 02:35:09.671701842 +0000 UTC m=+45.392990948" watchObservedRunningTime="2023-10-06 02:36:03.185204075 +0000 UTC m=+98.906493165"
	Oct 06 02:36:03 multinode-951739 kubelet[1396]: I1006 02:36:03.185451    1396 topology_manager.go:215] "Topology Admit Handler" podUID="13da5d37-0236-42ca-a852-eeecfae7de4f" podNamespace="default" podName="busybox-5bc68d56bd-z7b7t"
	Oct 06 02:36:03 multinode-951739 kubelet[1396]: W1006 02:36:03.196484    1396 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:multinode-951739" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'multinode-951739' and this object
	Oct 06 02:36:03 multinode-951739 kubelet[1396]: E1006 02:36:03.196534    1396 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:multinode-951739" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'multinode-951739' and this object
	Oct 06 02:36:03 multinode-951739 kubelet[1396]: I1006 02:36:03.304070    1396 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz84h\" (UniqueName: \"kubernetes.io/projected/13da5d37-0236-42ca-a852-eeecfae7de4f-kube-api-access-bz84h\") pod \"busybox-5bc68d56bd-z7b7t\" (UID: \"13da5d37-0236-42ca-a852-eeecfae7de4f\") " pod="default/busybox-5bc68d56bd-z7b7t"
	Oct 06 02:36:04 multinode-951739 kubelet[1396]: E1006 02:36:04.415304    1396 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 06 02:36:04 multinode-951739 kubelet[1396]: E1006 02:36:04.415363    1396 projected.go:198] Error preparing data for projected volume kube-api-access-bz84h for pod default/busybox-5bc68d56bd-z7b7t: failed to sync configmap cache: timed out waiting for the condition
	Oct 06 02:36:04 multinode-951739 kubelet[1396]: E1006 02:36:04.415464    1396 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13da5d37-0236-42ca-a852-eeecfae7de4f-kube-api-access-bz84h podName:13da5d37-0236-42ca-a852-eeecfae7de4f nodeName:}" failed. No retries permitted until 2023-10-06 02:36:04.915434707 +0000 UTC m=+100.636723797 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bz84h" (UniqueName: "kubernetes.io/projected/13da5d37-0236-42ca-a852-eeecfae7de4f-kube-api-access-bz84h") pod "busybox-5bc68d56bd-z7b7t" (UID: "13da5d37-0236-42ca-a852-eeecfae7de4f") : failed to sync configmap cache: timed out waiting for the condition
	Oct 06 02:36:05 multinode-951739 kubelet[1396]: W1006 02:36:05.041091    1396 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/9edc152e73eb4597075d85be5cd5b8d85a9eb4dcbd14cb603ed7c177be913edc/crio-72e7493930118c11506162250ff871b7c1f04cece0a65d835d9abc7c49b4aa30 WatchSource:0}: Error finding container 72e7493930118c11506162250ff871b7c1f04cece0a65d835d9abc7c49b4aa30: Status 404 returned error can't find the container with id 72e7493930118c11506162250ff871b7c1f04cece0a65d835d9abc7c49b4aa30
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p multinode-951739 -n multinode-951739
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-951739 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (4.32s)
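The kindnet log above records an initial "Failed to get nodes ... dial tcp 10.96.0.1:443: i/o timeout" before both nodes sync, so pod-to-host connectivity is the first thing to re-check when PingHostFrom2Pods fails. A minimal manual probe, assuming the busybox pods named in the logs are still running (the context, pod names, and the 192.168.58.1 host address are all taken from the output above; this is a triage sketch, not part of the test harness):

	kubectl --context multinode-951739 exec busybox-5bc68d56bd-z7b7t -- nslookup host.minikube.internal
	kubectl --context multinode-951739 exec busybox-5bc68d56bd-qkd4k -- ping -c 1 192.168.58.1

If the lookup succeeds but the ping times out, the break is in pod-to-host routing rather than in CoreDNS.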

                                                
                                    
x
+
TestRunningBinaryUpgrade (77.08s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.17.0.3452026448.exe start -p running-upgrade-637320 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.17.0.3452026448.exe start -p running-upgrade-637320 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m5.382542078s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-637320 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p running-upgrade-637320 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (5.977408084s)
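For reference, the upgrade sequence this test drives is just the two invocations above run back-to-back against the same profile; a by-hand repro (binary path and flags copied verbatim from the Run lines, with a cleanup step added as an assumption) would be:

	/tmp/minikube-v1.17.0.3452026448.exe start -p running-upgrade-637320 --memory=2200 --vm-driver=docker --container-runtime=crio
	out/minikube-linux-arm64 start -p running-upgrade-637320 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 delete -p running-upgrade-637320

The stdout and stderr captured from the failing second start follow.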

                                                
                                                
-- stdout --
	* [running-upgrade-637320] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17314
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17314-2262959/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-2262959/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-637320 in cluster running-upgrade-637320
	* Pulling base image ...
	* Updating the running docker "running-upgrade-637320" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 02:51:52.959473 2395602 out.go:296] Setting OutFile to fd 1 ...
	I1006 02:51:52.959711 2395602 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 02:51:52.959737 2395602 out.go:309] Setting ErrFile to fd 2...
	I1006 02:51:52.959757 2395602 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 02:51:52.960044 2395602 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-2262959/.minikube/bin
	I1006 02:51:52.960612 2395602 out.go:303] Setting JSON to false
	I1006 02:51:52.962012 2395602 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":45259,"bootTime":1696515454,"procs":367,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1006 02:51:52.962121 2395602 start.go:138] virtualization:  
	I1006 02:51:52.966127 2395602 out.go:177] * [running-upgrade-637320] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1006 02:51:52.970662 2395602 notify.go:220] Checking for updates...
	I1006 02:51:52.971493 2395602 out.go:177]   - MINIKUBE_LOCATION=17314
	I1006 02:51:52.973548 2395602 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 02:51:52.975836 2395602 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17314-2262959/kubeconfig
	I1006 02:51:52.977736 2395602 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-2262959/.minikube
	I1006 02:51:52.979645 2395602 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 02:51:52.981422 2395602 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 02:51:52.983726 2395602 config.go:182] Loaded profile config "running-upgrade-637320": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1006 02:51:52.986603 2395602 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	I1006 02:51:52.988795 2395602 driver.go:378] Setting default libvirt URI to qemu:///system
	I1006 02:51:53.047253 2395602 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1006 02:51:53.047357 2395602 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 02:51:53.205298 2395602 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:54 SystemTime:2023-10-06 02:51:53.194038816 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1006 02:51:53.205493 2395602 docker.go:295] overlay module found
	I1006 02:51:53.208008 2395602 out.go:177] * Using the docker driver based on existing profile
	I1006 02:51:53.209974 2395602 start.go:298] selected driver: docker
	I1006 02:51:53.209994 2395602 start.go:902] validating driver "docker" against &{Name:running-upgrade-637320 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-637320 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.137 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1006 02:51:53.210129 2395602 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 02:51:53.211356 2395602 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 02:51:53.317413 2395602 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:54 SystemTime:2023-10-06 02:51:53.30444049 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1006 02:51:53.317783 2395602 cni.go:84] Creating CNI manager for ""
	I1006 02:51:53.317794 2395602 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 02:51:53.317812 2395602 start_flags.go:323] config:
	{Name:running-upgrade-637320 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-637320 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.137 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1006 02:51:53.320098 2395602 out.go:177] * Starting control plane node running-upgrade-637320 in cluster running-upgrade-637320
	I1006 02:51:53.321874 2395602 cache.go:122] Beginning downloading kic base image for docker with crio
	I1006 02:51:53.324721 2395602 out.go:177] * Pulling base image ...
	I1006 02:51:53.326630 2395602 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1006 02:51:53.326700 2395602 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1006 02:51:53.345219 2395602 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I1006 02:51:53.345241 2395602 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W1006 02:51:53.405744 2395602 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1006 02:51:53.405932 2395602 profile.go:148] Saving config to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/running-upgrade-637320/config.json ...
	I1006 02:51:53.405981 2395602 cache.go:107] acquiring lock: {Name:mkebee88fce238ff0e7e787ed96d7d7331a3727b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 02:51:53.406072 2395602 cache.go:115] /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1006 02:51:53.406083 2395602 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 108.669µs
	I1006 02:51:53.406092 2395602 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1006 02:51:53.406102 2395602 cache.go:107] acquiring lock: {Name:mk46ae56fe46f5390180d521a6cc721035e56a8b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 02:51:53.406132 2395602 cache.go:115] /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1006 02:51:53.406136 2395602 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 35.882µs
	I1006 02:51:53.406143 2395602 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I1006 02:51:53.406152 2395602 cache.go:107] acquiring lock: {Name:mk86e13e6b3b437bb7e26c6b60be830b430c6e39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 02:51:53.406177 2395602 cache.go:115] /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1006 02:51:53.406184 2395602 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 31.073µs
	I1006 02:51:53.406190 2395602 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I1006 02:51:53.406201 2395602 cache.go:195] Successfully downloaded all kic artifacts
	I1006 02:51:53.406198 2395602 cache.go:107] acquiring lock: {Name:mkd5603e96a2ebfc7e6761dedb6f700bf1e3e05f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 02:51:53.406227 2395602 cache.go:115] /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1006 02:51:53.406231 2395602 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 34.052µs
	I1006 02:51:53.406238 2395602 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I1006 02:51:53.406247 2395602 cache.go:107] acquiring lock: {Name:mk9386f4a4cab443db6cd71364319df13794a376 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 02:51:53.406258 2395602 start.go:365] acquiring machines lock for running-upgrade-637320: {Name:mkdfb50f8a36003ae936b6912e9ce832e3cb348d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 02:51:53.406270 2395602 cache.go:115] /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1006 02:51:53.406275 2395602 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 29.358µs
	I1006 02:51:53.406283 2395602 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I1006 02:51:53.406298 2395602 start.go:369] acquired machines lock for "running-upgrade-637320" in 26.905µs
	I1006 02:51:53.406301 2395602 cache.go:107] acquiring lock: {Name:mk2d20f834ef30a07ef1d766d525a500dd7188a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 02:51:53.406312 2395602 start.go:96] Skipping create...Using existing machine configuration
	I1006 02:51:53.406318 2395602 fix.go:54] fixHost starting: 
	I1006 02:51:53.406326 2395602 cache.go:115] /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1006 02:51:53.406331 2395602 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 31.294µs
	I1006 02:51:53.406337 2395602 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I1006 02:51:53.406345 2395602 cache.go:107] acquiring lock: {Name:mk543ec0afb71de658a5f310eba68d04a247c52f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 02:51:53.406371 2395602 cache.go:115] /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1006 02:51:53.406375 2395602 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 31.147µs
	I1006 02:51:53.406381 2395602 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1006 02:51:53.406392 2395602 cache.go:107] acquiring lock: {Name:mk1d388e71632749d28243135d046b61853f0ee4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 02:51:53.406416 2395602 cache.go:115] /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1006 02:51:53.406421 2395602 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 33.124µs
	I1006 02:51:53.406427 2395602 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I1006 02:51:53.406447 2395602 cache.go:87] Successfully saved all images to host disk.
	I1006 02:51:53.406618 2395602 cli_runner.go:164] Run: docker container inspect running-upgrade-637320 --format={{.State.Status}}
	I1006 02:51:53.440830 2395602 fix.go:102] recreateIfNeeded on running-upgrade-637320: state=Running err=<nil>
	W1006 02:51:53.440860 2395602 fix.go:128] unexpected machine state, will restart: <nil>
	I1006 02:51:53.444394 2395602 out.go:177] * Updating the running docker "running-upgrade-637320" container ...
	I1006 02:51:53.446147 2395602 machine.go:88] provisioning docker machine ...
	I1006 02:51:53.446178 2395602 ubuntu.go:169] provisioning hostname "running-upgrade-637320"
	I1006 02:51:53.446304 2395602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-637320
	I1006 02:51:53.483246 2395602 main.go:141] libmachine: Using SSH client type: native
	I1006 02:51:53.483670 2395602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 35450 <nil> <nil>}
	I1006 02:51:53.483687 2395602 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-637320 && echo "running-upgrade-637320" | sudo tee /etc/hostname
	I1006 02:51:53.664574 2395602 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-637320
	
	I1006 02:51:53.664669 2395602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-637320
	I1006 02:51:53.684364 2395602 main.go:141] libmachine: Using SSH client type: native
	I1006 02:51:53.684763 2395602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 35450 <nil> <nil>}
	I1006 02:51:53.684785 2395602 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-637320' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-637320/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-637320' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 02:51:53.832857 2395602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 02:51:53.832884 2395602 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17314-2262959/.minikube CaCertPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17314-2262959/.minikube}
	I1006 02:51:53.832906 2395602 ubuntu.go:177] setting up certificates
	I1006 02:51:53.832918 2395602 provision.go:83] configureAuth start
	I1006 02:51:53.832983 2395602 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-637320
	I1006 02:51:53.856668 2395602 provision.go:138] copyHostCerts
	I1006 02:51:53.856747 2395602 exec_runner.go:144] found /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.pem, removing ...
	I1006 02:51:53.856770 2395602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.pem
	I1006 02:51:53.856845 2395602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.pem (1082 bytes)
	I1006 02:51:53.856939 2395602 exec_runner.go:144] found /home/jenkins/minikube-integration/17314-2262959/.minikube/cert.pem, removing ...
	I1006 02:51:53.856945 2395602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17314-2262959/.minikube/cert.pem
	I1006 02:51:53.856971 2395602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17314-2262959/.minikube/cert.pem (1123 bytes)
	I1006 02:51:53.857021 2395602 exec_runner.go:144] found /home/jenkins/minikube-integration/17314-2262959/.minikube/key.pem, removing ...
	I1006 02:51:53.857026 2395602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17314-2262959/.minikube/key.pem
	I1006 02:51:53.857048 2395602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17314-2262959/.minikube/key.pem (1675 bytes)
	I1006 02:51:53.857091 2395602 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-637320 san=[192.168.70.137 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-637320]
	I1006 02:51:55.073502 2395602 provision.go:172] copyRemoteCerts
	I1006 02:51:55.073580 2395602 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 02:51:55.073628 2395602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-637320
	I1006 02:51:55.095377 2395602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35450 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/running-upgrade-637320/id_rsa Username:docker}
	I1006 02:51:55.197972 2395602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 02:51:55.221932 2395602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1006 02:51:55.247786 2395602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 02:51:55.275342 2395602 provision.go:86] duration metric: configureAuth took 1.442407266s
	I1006 02:51:55.275369 2395602 ubuntu.go:193] setting minikube options for container-runtime
	I1006 02:51:55.275570 2395602 config.go:182] Loaded profile config "running-upgrade-637320": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1006 02:51:55.275691 2395602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-637320
	I1006 02:51:55.297161 2395602 main.go:141] libmachine: Using SSH client type: native
	I1006 02:51:55.297592 2395602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 35450 <nil> <nil>}
	I1006 02:51:55.297626 2395602 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 02:51:56.106799 2395602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 02:51:56.106844 2395602 machine.go:91] provisioned docker machine in 2.660677253s
	I1006 02:51:56.106869 2395602 start.go:300] post-start starting for "running-upgrade-637320" (driver="docker")
	I1006 02:51:56.106889 2395602 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 02:51:56.106963 2395602 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 02:51:56.107018 2395602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-637320
	I1006 02:51:56.146395 2395602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35450 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/running-upgrade-637320/id_rsa Username:docker}
	I1006 02:51:56.269995 2395602 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 02:51:56.276134 2395602 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1006 02:51:56.276173 2395602 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 02:51:56.276196 2395602 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1006 02:51:56.276207 2395602 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1006 02:51:56.276222 2395602 filesync.go:126] Scanning /home/jenkins/minikube-integration/17314-2262959/.minikube/addons for local assets ...
	I1006 02:51:56.276317 2395602 filesync.go:126] Scanning /home/jenkins/minikube-integration/17314-2262959/.minikube/files for local assets ...
	I1006 02:51:56.276426 2395602 filesync.go:149] local asset: /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/22683062.pem -> 22683062.pem in /etc/ssl/certs
	I1006 02:51:56.276594 2395602 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 02:51:56.296117 2395602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/22683062.pem --> /etc/ssl/certs/22683062.pem (1708 bytes)
	I1006 02:51:56.460509 2395602 start.go:303] post-start completed in 353.618717ms
	I1006 02:51:56.460619 2395602 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 02:51:56.460688 2395602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-637320
	I1006 02:51:56.493098 2395602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35450 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/running-upgrade-637320/id_rsa Username:docker}
	I1006 02:51:56.632113 2395602 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 02:51:56.639973 2395602 fix.go:56] fixHost completed within 3.23364447s
	I1006 02:51:56.639993 2395602 start.go:83] releasing machines lock for "running-upgrade-637320", held for 3.233687219s
	I1006 02:51:56.640064 2395602 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-637320
	I1006 02:51:56.665119 2395602 ssh_runner.go:195] Run: cat /version.json
	I1006 02:51:56.665152 2395602 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 02:51:56.665182 2395602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-637320
	I1006 02:51:56.665197 2395602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-637320
	I1006 02:51:56.695203 2395602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35450 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/running-upgrade-637320/id_rsa Username:docker}
	I1006 02:51:56.703637 2395602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35450 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/running-upgrade-637320/id_rsa Username:docker}
	W1006 02:51:56.970591 2395602 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1006 02:51:56.970679 2395602 ssh_runner.go:195] Run: systemctl --version
	I1006 02:51:56.987365 2395602 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 02:51:57.285094 2395602 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1006 02:51:57.292150 2395602 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 02:51:57.321427 2395602 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1006 02:51:57.321573 2395602 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 02:51:57.480114 2395602 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1006 02:51:57.480175 2395602 start.go:472] detecting cgroup driver to use...
	I1006 02:51:57.480235 2395602 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1006 02:51:57.480321 2395602 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 02:51:57.618560 2395602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 02:51:57.639795 2395602 docker.go:198] disabling cri-docker service (if available) ...
	I1006 02:51:57.639954 2395602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 02:51:57.671461 2395602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 02:51:57.696490 2395602 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1006 02:51:57.735088 2395602 docker.go:208] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1006 02:51:57.735213 2395602 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 02:51:58.190465 2395602 docker.go:214] disabling docker service ...
	I1006 02:51:58.190577 2395602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 02:51:58.206863 2395602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 02:51:58.220358 2395602 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 02:51:58.447256 2395602 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 02:51:58.701831 2395602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 02:51:58.736566 2395602 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 02:51:58.779998 2395602 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1006 02:51:58.780120 2395602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 02:51:58.833887 2395602 out.go:177] 
	W1006 02:51:58.835933 2395602 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1006 02:51:58.836111 2395602 out.go:239] * 
	W1006 02:51:58.837231 2395602 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 02:51:58.838818 2395602 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p running-upgrade-637320 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
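
The exit status 90 has a single root cause, visible at the end of the stderr log: the new binary rewrites pause_image by running sed against /etc/crio/crio.conf.d/02-crio.conf, but the v0.0.17 kicbase container provisioned by the old v1.17.0 binary predates that drop-in file, so sed exits 2 with "No such file or directory". Below is a minimal Go sketch of a guarded variant of that step; updatePauseImage, configPath, and pauseImage are illustrative names for this note, not minikube's actual API.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// updatePauseImage mirrors the failing step from the log, but stats the
	// crio drop-in first so a missing file yields an explicit error instead
	// of sed's opaque exit status 2.
	func updatePauseImage(configPath, pauseImage string) error {
		if _, err := os.Stat(configPath); err != nil {
			return fmt.Errorf("crio drop-in %s not readable (old base image?): %w", configPath, err)
		}
		expr := fmt.Sprintf(`s|^.*pause_image = .*$|pause_image = "%s"|`, pauseImage)
		return exec.Command("sudo", "sed", "-i", expr, configPath).Run()
	}

	func main() {
		err := updatePauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.2")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
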
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-10-06 02:51:58.875940619 +0000 UTC m=+2455.010618794
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-637320
helpers_test.go:235: (dbg) docker inspect running-upgrade-637320:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d3a70d6be6bfe068064bfdf1731ac755ac4c2aff1a86a14acea5236fef58010c",
	        "Created": "2023-10-06T02:51:01.424706946Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2390629,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-06T02:51:01.983293447Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/d3a70d6be6bfe068064bfdf1731ac755ac4c2aff1a86a14acea5236fef58010c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d3a70d6be6bfe068064bfdf1731ac755ac4c2aff1a86a14acea5236fef58010c/hostname",
	        "HostsPath": "/var/lib/docker/containers/d3a70d6be6bfe068064bfdf1731ac755ac4c2aff1a86a14acea5236fef58010c/hosts",
	        "LogPath": "/var/lib/docker/containers/d3a70d6be6bfe068064bfdf1731ac755ac4c2aff1a86a14acea5236fef58010c/d3a70d6be6bfe068064bfdf1731ac755ac4c2aff1a86a14acea5236fef58010c-json.log",
	        "Name": "/running-upgrade-637320",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-637320:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "running-upgrade-637320",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/47db23e38780be174c7d038fca01ed03795e61c290c05842fe93f1a5c6b0a8f0-init/diff:/var/lib/docker/overlay2/fe62f95b947a2754fde5f883d4293a92f922b968fb4bd7f3654a2c7bdbc522f2/diff:/var/lib/docker/overlay2/11046de113c043eb3cc53cd5095b99a6192e45e43ff2bf27563a50c6b83509ac/diff:/var/lib/docker/overlay2/66e29f554f934e3041176fcf2587bcf4ba25b124079adb8f1547cc4de718c9df/diff:/var/lib/docker/overlay2/057007ae6daad8d784a1c10fb40b43c77d3d01565ed2d20eb8ecd202157f8c55/diff:/var/lib/docker/overlay2/f2912256efe331942aeb0729d132f576df35acacc791a93a62c5ca317b2bcf8e/diff:/var/lib/docker/overlay2/e2cd22e7e313277c859907c960e2cbe37d8a7378a8751a948ea4003acd61bd6d/diff:/var/lib/docker/overlay2/99dd6ddea4e90170f3ce60454ce71d27fb9733787e59844a90b1f0938652161a/diff:/var/lib/docker/overlay2/704d5b948caf53be47cf27d86a9689298a28a774534b248b7c9d3974e54c2b66/diff:/var/lib/docker/overlay2/a5fbb8d76b9f8b317e00d88f17e91126d413d47eba60d3660e0c3da1d8ed0afc/diff:/var/lib/docker/overlay2/b5b709
0da5d390fa7807765440ba718153e91b87f2c6384c0c16e5a0ffd9bc91/diff:/var/lib/docker/overlay2/fc16d884e053237d0cdc879e1c890bd6d12616ce175347461af04475eef61782/diff:/var/lib/docker/overlay2/2b235e1fc76b03c061790bd587390d49c3113f5250fd6ec516e9b6df1962dfd0/diff:/var/lib/docker/overlay2/c18ea963d54a407a70c2dd1741565e1de1733536263a94ead2696362b67d93c8/diff:/var/lib/docker/overlay2/1aec82aa59cab0b3cab30746fd525f4f021fdbb6ff4cc6f96837bccf40fea3bb/diff:/var/lib/docker/overlay2/4256fb7513050d910a7b9f050d9f93486c5ccef0a946f0b15d058540511f4be3/diff:/var/lib/docker/overlay2/c6d85f101c3f5dde457f99b03032c44f5ab2f02f223cf4a444c5a5adfa149686/diff:/var/lib/docker/overlay2/d10583c5bd3954a7caec9c7a12061715c4939275cb95a38f9b5de669e56dbd00/diff:/var/lib/docker/overlay2/d9945a6dc28d46858ff81789fe5385a1cf01d5323f3d50386075a8c260cc48e3/diff:/var/lib/docker/overlay2/200a5bb7f2f41747f2695d06d8ff06c3efd7dd349ea02cebbe16d3f1580b4d3c/diff:/var/lib/docker/overlay2/558fc29f8666f5e8da6ee4c15cbd8004e002a37b90dbdaf2b21b2cda6c2a666f/diff:/var/lib/d
ocker/overlay2/cb8dfe62af2bf77453cc4e816b99f40082f2907bef2b3463a904b24d9ed13bd8/diff:/var/lib/docker/overlay2/ce9e2f6c6fa0270c63c0da2c1de3fcbc1e9d73c922cbe11138038e571135816f/diff:/var/lib/docker/overlay2/cd96b2445ea3bedb2aff6d9f6318d786441c08c592cab4f8a46f07f3a2e23b7e/diff:/var/lib/docker/overlay2/a9ab47b71ce4760e9f9476dc2b08c681d19fd05b75201d809653ab453d0f60eb/diff:/var/lib/docker/overlay2/b4ac75486fc8671cbd029e9947de123242f9ebe9781f6ff28d2a188fa5bbe95f/diff:/var/lib/docker/overlay2/23617da6d19b39241dddaf6bdece5a24480950318f069945c733d7a097b07397/diff:/var/lib/docker/overlay2/e4498e8ebea8ccf6d777e2debda425b0615293205de3f693edd758b02a4f3ad2/diff:/var/lib/docker/overlay2/e5bb475d80b646bb1fac2e91489351b118f1e96a7ed3803a703bf10f7659fdaf/diff:/var/lib/docker/overlay2/16847f3a99ff3b08c73ed46dc7a841c95fcc038ed6720f4608a8d828f5fdad41/diff:/var/lib/docker/overlay2/0e218c144fed594ef4296ed914b73d8797be09cb8ceb5e290accd2d89b8d8159/diff:/var/lib/docker/overlay2/c8925453ec9345ea7907b21857cfe3a0b0af3e9dc0743b6a966993f1422
6e755/diff:/var/lib/docker/overlay2/aa88d3ef0c1e7f8a334917b0083f3b5198cd93f6d313f3eb7e3043e93e2dd744/diff:/var/lib/docker/overlay2/020332a642dc2104372528a4bed139d6196bcbcf33e1b2fcd7bfefde293de5fe/diff:/var/lib/docker/overlay2/fe5f9352e8cff247613f285de138597654a75694b70ae17ce1dcaa8dec52fa30/diff",
	                "MergedDir": "/var/lib/docker/overlay2/47db23e38780be174c7d038fca01ed03795e61c290c05842fe93f1a5c6b0a8f0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/47db23e38780be174c7d038fca01ed03795e61c290c05842fe93f1a5c6b0a8f0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/47db23e38780be174c7d038fca01ed03795e61c290c05842fe93f1a5c6b0a8f0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-637320",
	                "Source": "/var/lib/docker/volumes/running-upgrade-637320/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-637320",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-637320",
	                "name.minikube.sigs.k8s.io": "running-upgrade-637320",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f4e8b0168d042dff9c5188afa815b9e4bf13729bf06bb2823914798682d4f9ca",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35450"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35449"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35448"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35447"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f4e8b0168d04",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "running-upgrade-637320": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.70.137"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d3a70d6be6bf",
	                        "running-upgrade-637320"
	                    ],
	                    "NetworkID": "93f527105516581ea4c7861c33f282cc5c577a93e1d4a00557540dcf5c6631b5",
	                    "EndpointID": "bc624fb2c365e6ffc3031031e6e6f6e64d37e6fc856882052cb16e8176f77d24",
	                    "Gateway": "192.168.70.1",
	                    "IPAddress": "192.168.70.137",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:46:89",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
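
The inspect output above also confirms the port mapping that the repeated `docker container inspect -f` calls in the stderr log rely on: NetworkSettings.Ports["22/tcp"][0].HostPort is 35450, the SSH endpoint every sshutil.go connection uses. A short Go sketch of the same lookup through the Docker CLI; hostPortFor is an illustrative helper name for this note, not minikube's code.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostPortFor runs the same Go-template inspect seen in the log to map a
	// container port (e.g. "22/tcp") to its published host port.
	func hostPortFor(container, port string) (string, error) {
		format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		p, err := hostPortFor("running-upgrade-637320", "22/tcp")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh host port:", p) // 35450 in the run above
	}
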
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-637320 -n running-upgrade-637320
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-637320 -n running-upgrade-637320: exit status 4 (678.715412ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1006 02:51:59.487280 2396452 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-637320" does not appear in /home/jenkins/minikube-integration/17314-2262959/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-637320" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
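
The exit status 4 is consistent with the kubeconfig error in the stderr block: the profile's endpoint was never written back after the failed upgrade, which is exactly the stale-context case the stdout warning points at. In a live session the advised fix would be `minikube update-context`, presumably with `-p running-upgrade-637320` to target this profile; here the profile is simply deleted below.
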
helpers_test.go:175: Cleaning up "running-upgrade-637320" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-637320
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-637320: (3.804456548s)
--- FAIL: TestRunningBinaryUpgrade (77.08s)
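
One earlier detail in the stderr log is worth flagging for anyone re-running this: preload.go:115 reports a 404 for the v1.20.2/cri-o/arm64 preload tarball, after which minikube falls back to caching images one by one (the cache.go:96/cache.go:80 lines), all of which hit because the old binary had already populated the job's .minikube/cache. A rough Go sketch of that check-then-fallback pattern, assuming a HEAD-style existence probe; preloadExists is a hypothetical stand-in, not minikube's API.

	package main

	import (
		"fmt"
		"net/http"
	)

	// preloadExists probes the preload tarball URL; the run above got a 404 here.
	func preloadExists(url string) bool {
		resp, err := http.Head(url)
		if err != nil {
			return false
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK
	}

	func main() {
		url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4"
		if preloadExists(url) {
			fmt.Println("download preload tarball")
			return
		}
		// 404: cache each image individually, as the cache.go lines report.
		for _, img := range []string{"registry.k8s.io/kube-apiserver:v1.20.2", "registry.k8s.io/pause:3.2"} {
			fmt.Println("caching", img)
		}
	}
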

                                                
                                    
x
+
TestMissingContainerUpgrade (146.89s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.17.0.1603406369.exe start -p missing-upgrade-093801 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.17.0.1603406369.exe start -p missing-upgrade-093801 --memory=2200 --driver=docker  --container-runtime=crio: (1m38.340139707s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-093801
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-093801: (4.665684667s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-093801
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-093801 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:342: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p missing-upgrade-093801 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (39.532687209s)

                                                
                                                
-- stdout --
	* [missing-upgrade-093801] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17314
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17314-2262959/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-2262959/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-093801 in cluster missing-upgrade-093801
	* Pulling base image ...
	* docker "missing-upgrade-093801" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 02:48:45.418724 2379407 out.go:296] Setting OutFile to fd 1 ...
	I1006 02:48:45.418958 2379407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 02:48:45.418986 2379407 out.go:309] Setting ErrFile to fd 2...
	I1006 02:48:45.419005 2379407 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 02:48:45.419293 2379407 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-2262959/.minikube/bin
	I1006 02:48:45.419753 2379407 out.go:303] Setting JSON to false
	I1006 02:48:45.420787 2379407 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":45072,"bootTime":1696515454,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1006 02:48:45.420890 2379407 start.go:138] virtualization:  
	I1006 02:48:45.426568 2379407 out.go:177] * [missing-upgrade-093801] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1006 02:48:45.428758 2379407 out.go:177]   - MINIKUBE_LOCATION=17314
	I1006 02:48:45.428852 2379407 notify.go:220] Checking for updates...
	I1006 02:48:45.431997 2379407 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 02:48:45.434726 2379407 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17314-2262959/kubeconfig
	I1006 02:48:45.436825 2379407 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-2262959/.minikube
	I1006 02:48:45.439219 2379407 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 02:48:45.441537 2379407 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 02:48:45.444595 2379407 config.go:182] Loaded profile config "missing-upgrade-093801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1006 02:48:45.447436 2379407 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	I1006 02:48:45.449567 2379407 driver.go:378] Setting default libvirt URI to qemu:///system
	I1006 02:48:45.490185 2379407 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1006 02:48:45.490270 2379407 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 02:48:45.612992 2379407 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:53 SystemTime:2023-10-06 02:48:45.602882992 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1006 02:48:45.613101 2379407 docker.go:295] overlay module found
	I1006 02:48:45.614968 2379407 out.go:177] * Using the docker driver based on existing profile
	I1006 02:48:45.616879 2379407 start.go:298] selected driver: docker
	I1006 02:48:45.616893 2379407 start.go:902] validating driver "docker" against &{Name:missing-upgrade-093801 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-093801 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.106 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1006 02:48:45.616993 2379407 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 02:48:45.617611 2379407 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 02:48:45.713880 2379407 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:53 SystemTime:2023-10-06 02:48:45.701168029 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1006 02:48:45.714171 2379407 cni.go:84] Creating CNI manager for ""
	I1006 02:48:45.714180 2379407 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
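The CNI decision logged just above (docker driver plus crio runtime, therefore kindnet) reduces to a small rule. A minimal sketch in Go, with the hypothetical helper chooseCNI standing in for minikube's actual selection code:

```go
package main

import "fmt"

// chooseCNI is a hypothetical reduction of the decision logged by
// cni.go:143 above: kic drivers (docker/podman) paired with the crio
// runtime cannot rely on a built-in bridge, so kindnet is recommended.
func chooseCNI(driver, runtime string) string {
	if (driver == "docker" || driver == "podman") && runtime == "crio" {
		return "kindnet"
	}
	return "" // otherwise leave CNI selection to the runtime's default
}

func main() {
	fmt.Println(chooseCNI("docker", "crio")) // kindnet
}
```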
	I1006 02:48:45.714190 2379407 start_flags.go:323] config:
	{Name:missing-upgrade-093801 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-093801 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.106 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1006 02:48:45.716450 2379407 out.go:177] * Starting control plane node missing-upgrade-093801 in cluster missing-upgrade-093801
	I1006 02:48:45.718775 2379407 cache.go:122] Beginning downloading kic base image for docker with crio
	I1006 02:48:45.721649 2379407 out.go:177] * Pulling base image ...
	I1006 02:48:45.723569 2379407 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1006 02:48:45.723772 2379407 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1006 02:48:45.761501 2379407 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	I1006 02:48:45.761753 2379407 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local cache directory
	I1006 02:48:45.761799 2379407 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	W1006 02:48:45.794805 2379407 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1006 02:48:45.794945 2379407 profile.go:148] Saving config to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/missing-upgrade-093801/config.json ...
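The 404 above means no preloaded-images tarball was published for Kubernetes v1.20.2 with cri-o on arm64, so minikube falls back to caching each image individually, which is what the lock-and-save lines that follow show. A minimal sketch of that probe-then-fallback step, assuming a plain HTTP HEAD check; preloadExists is a hypothetical name, not minikube's actual function:

```go
package main

import (
	"fmt"
	"net/http"
)

// preloadExists issues a HEAD request for the preload tarball; a 404,
// as in the log above, signals that per-image caching must be used.
func preloadExists(url string) (bool, error) {
	resp, err := http.Head(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK, nil
}

func main() {
	// The URL mirrors the one probed in the log above.
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4"
	ok, err := preloadExists(url)
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	if !ok {
		fmt.Println("no preload; caching images one by one")
	}
}
```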
	I1006 02:48:45.795319 2379407 cache.go:107] acquiring lock: {Name:mkebee88fce238ff0e7e787ed96d7d7331a3727b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 02:48:45.795419 2379407 cache.go:115] /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1006 02:48:45.795435 2379407 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 113.733µs
	I1006 02:48:45.795444 2379407 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1006 02:48:45.795455 2379407 cache.go:107] acquiring lock: {Name:mk46ae56fe46f5390180d521a6cc721035e56a8b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 02:48:45.795537 2379407 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.2
	I1006 02:48:45.795833 2379407 cache.go:107] acquiring lock: {Name:mk543ec0afb71de658a5f310eba68d04a247c52f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 02:48:45.795951 2379407 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1006 02:48:45.796052 2379407 cache.go:107] acquiring lock: {Name:mkd5603e96a2ebfc7e6761dedb6f700bf1e3e05f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 02:48:45.796147 2379407 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.2
	I1006 02:48:45.796243 2379407 cache.go:107] acquiring lock: {Name:mk9386f4a4cab443db6cd71364319df13794a376 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 02:48:45.796324 2379407 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.2
	I1006 02:48:45.796422 2379407 cache.go:107] acquiring lock: {Name:mk2d20f834ef30a07ef1d766d525a500dd7188a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 02:48:45.796488 2379407 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1006 02:48:45.796707 2379407 cache.go:107] acquiring lock: {Name:mk1d388e71632749d28243135d046b61853f0ee4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 02:48:45.796789 2379407 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I1006 02:48:45.795697 2379407 cache.go:107] acquiring lock: {Name:mk86e13e6b3b437bb7e26c6b60be830b430c6e39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 02:48:45.797078 2379407 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.2
	I1006 02:48:45.799711 2379407 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.2
	I1006 02:48:45.800146 2379407 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1006 02:48:45.800670 2379407 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.2
	I1006 02:48:45.800981 2379407 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.2
	I1006 02:48:45.801164 2379407 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.2
	I1006 02:48:45.801216 2379407 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1006 02:48:45.802656 2379407 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1006 02:48:46.251607 2379407 cache.go:162] opening:  /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2
	W1006 02:48:46.274265 2379407 image.go:265] image registry.k8s.io/coredns:1.7.0 arch mismatch: want arm64 got amd64. fixing
	I1006 02:48:46.274330 2379407 cache.go:162] opening:  /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0
	I1006 02:48:46.278396 2379407 cache.go:162] opening:  /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2
	I1006 02:48:46.283828 2379407 cache.go:162] opening:  /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	W1006 02:48:46.320881 2379407 image.go:265] image registry.k8s.io/etcd:3.4.13-0 arch mismatch: want arm64 got amd64. fixing
	I1006 02:48:46.320957 2379407 cache.go:162] opening:  /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0
	W1006 02:48:46.324796 2379407 image.go:265] image registry.k8s.io/kube-proxy:v1.20.2 arch mismatch: want arm64 got amd64. fixing
	I1006 02:48:46.324886 2379407 cache.go:162] opening:  /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2
	I1006 02:48:46.353022 2379407 cache.go:162] opening:  /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2
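The arch-mismatch warnings below show amd64 manifests being re-fetched for arm64. A sketch of a platform-pinned pull using the go-containerregistry remote API (the library minikube's image handling builds on); pullForArm64 is a hypothetical helper:

```go
package main

import (
	"fmt"

	"github.com/google/go-containerregistry/pkg/name"
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/remote"
)

// Pull an image for an explicit platform so an amd64 manifest is not
// cached on an arm64 host, the mismatch the warnings below repair.
func pullForArm64(image string) (v1.Image, error) {
	ref, err := name.ParseReference(image)
	if err != nil {
		return nil, err
	}
	return remote.Image(ref, remote.WithPlatform(v1.Platform{
		OS:           "linux",
		Architecture: "arm64",
	}))
}

func main() {
	img, err := pullForArm64("registry.k8s.io/coredns:1.7.0")
	if err != nil {
		fmt.Println(err)
		return
	}
	digest, _ := img.Digest()
	fmt.Println("resolved:", digest)
}
```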
	    > gcr.io/k8s-minikube/kicbase...:  17.69 KiB / 287.99 MiB [>] 0.01% ? p/s ?
	I1006 02:48:46.508564 2379407 cache.go:157] /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1006 02:48:46.508587 2379407 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 712.166803ms
	I1006 02:48:46.508600 2379407 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  17.78 MiB / 287.99 MiB [>] 6.17% ? p/s ?
	I1006 02:48:46.914878 2379407 cache.go:157] /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1006 02:48:46.914955 2379407 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 1.118246899s
	I1006 02:48:46.914984 2379407 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  25.93 MiB / 287.99 MiB  9.00% 43.20 MiB
	I1006 02:48:47.071507 2379407 cache.go:157] /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1006 02:48:47.071532 2379407 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 1.275481223s
	I1006 02:48:47.071545 2379407 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  25.93 MiB / 287.99 MiB  9.00% 43.20 MiB
	I1006 02:48:47.602203 2379407 cache.go:157] /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1006 02:48:47.602233 2379407 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 1.806778401s
	I1006 02:48:47.602246 2379407 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  43.87 MiB / 287.99 MiB  15.23% 40.44 MiB
	I1006 02:48:48.112720 2379407 cache.go:157] /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1006 02:48:48.112753 2379407 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 2.31706201s
	I1006 02:48:48.113038 2379407 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  72.11 MiB / 287.99 MiB  25.04% 40.90 MiB
	I1006 02:48:48.727260 2379407 cache.go:157] /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1006 02:48:48.727340 2379407 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 2.93109976s
	I1006 02:48:48.727368 2379407 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  287.97 MiB / 287.99 MiB  99.99% 48.84 MiB
	I1006 02:48:52.421375 2379407 cache.go:157] /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1006 02:48:52.421434 2379407 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 6.625604025s
	I1006 02:48:52.421475 2379407 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1006 02:48:52.421496 2379407 cache.go:87] Successfully saved all images to host disk.
	    > gcr.io/k8s-minikube/kicbase...:  287.99 MiB / 287.99 MiB  100.00% 42.01 MiB
	I1006 02:48:53.322931 2379407 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e as a tarball
	I1006 02:48:53.322964 2379407 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from local cache
	I1006 02:48:54.450830 2379407 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from cached tarball
	I1006 02:48:54.450883 2379407 cache.go:195] Successfully downloaded all kic artifacts
	I1006 02:48:54.450946 2379407 start.go:365] acquiring machines lock for missing-upgrade-093801: {Name:mk98ed1fb1dd1e093dd237c75c8c0005f8ed5772 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 02:48:54.451041 2379407 start.go:369] acquired machines lock for "missing-upgrade-093801" in 64.51µs
	I1006 02:48:54.451089 2379407 start.go:96] Skipping create...Using existing machine configuration
	I1006 02:48:54.451104 2379407 fix.go:54] fixHost starting: 
	I1006 02:48:54.451436 2379407 cli_runner.go:164] Run: docker container inspect missing-upgrade-093801 --format={{.State.Status}}
	W1006 02:48:54.480560 2379407 cli_runner.go:211] docker container inspect missing-upgrade-093801 --format={{.State.Status}} returned with exit code 1
	I1006 02:48:54.480618 2379407 fix.go:102] recreateIfNeeded on missing-upgrade-093801: state= err=unknown state "missing-upgrade-093801": docker container inspect missing-upgrade-093801 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-093801
	I1006 02:48:54.480643 2379407 fix.go:107] machineExists: false. err=machine does not exist
	I1006 02:48:54.483164 2379407 out.go:177] * docker "missing-upgrade-093801" container is missing, will recreate.
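The recreate decision above hinges on `docker container inspect` exiting non-zero with "No such container" on stderr. A reduced sketch of that probe, assuming the docker CLI is on PATH; containerState is a hypothetical helper, not minikube's actual API:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState mirrors the probe in the log: `docker container
// inspect --format {{.State.Status}}`. Exit status 1 plus "No such
// container" on stderr means the machine is gone and must be recreated.
func containerState(name string) (string, bool) {
	out, err := exec.Command("docker", "container", "inspect",
		name, "--format", "{{.State.Status}}").CombinedOutput()
	if err != nil {
		return strings.TrimSpace(string(out)), false
	}
	return strings.TrimSpace(string(out)), true
}

func main() {
	state, exists := containerState("missing-upgrade-093801")
	fmt.Println(state, exists)
}
```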
	I1006 02:48:54.484763 2379407 delete.go:124] DEMOLISHING missing-upgrade-093801 ...
	I1006 02:48:54.484881 2379407 cli_runner.go:164] Run: docker container inspect missing-upgrade-093801 --format={{.State.Status}}
	W1006 02:48:54.510670 2379407 cli_runner.go:211] docker container inspect missing-upgrade-093801 --format={{.State.Status}} returned with exit code 1
	W1006 02:48:54.510740 2379407 stop.go:75] unable to get state: unknown state "missing-upgrade-093801": docker container inspect missing-upgrade-093801 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-093801
	I1006 02:48:54.510764 2379407 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-093801": docker container inspect missing-upgrade-093801 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-093801
	I1006 02:48:54.511237 2379407 cli_runner.go:164] Run: docker container inspect missing-upgrade-093801 --format={{.State.Status}}
	W1006 02:48:54.533788 2379407 cli_runner.go:211] docker container inspect missing-upgrade-093801 --format={{.State.Status}} returned with exit code 1
	I1006 02:48:54.533854 2379407 delete.go:82] Unable to get host status for missing-upgrade-093801, assuming it has already been deleted: state: unknown state "missing-upgrade-093801": docker container inspect missing-upgrade-093801 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-093801
	I1006 02:48:54.533927 2379407 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-093801
	W1006 02:48:54.552756 2379407 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-093801 returned with exit code 1
	I1006 02:48:54.552792 2379407 kic.go:368] could not find the container missing-upgrade-093801 to remove it. will try anyways
	I1006 02:48:54.552856 2379407 cli_runner.go:164] Run: docker container inspect missing-upgrade-093801 --format={{.State.Status}}
	W1006 02:48:54.570820 2379407 cli_runner.go:211] docker container inspect missing-upgrade-093801 --format={{.State.Status}} returned with exit code 1
	W1006 02:48:54.570877 2379407 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-093801": docker container inspect missing-upgrade-093801 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-093801
	I1006 02:48:54.570942 2379407 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-093801 /bin/bash -c "sudo init 0"
	W1006 02:48:54.593553 2379407 cli_runner.go:211] docker exec --privileged -t missing-upgrade-093801 /bin/bash -c "sudo init 0" returned with exit code 1
	I1006 02:48:54.593588 2379407 oci.go:650] error shutdown missing-upgrade-093801: docker exec --privileged -t missing-upgrade-093801 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-093801
	I1006 02:48:55.594594 2379407 cli_runner.go:164] Run: docker container inspect missing-upgrade-093801 --format={{.State.Status}}
	W1006 02:48:55.624942 2379407 cli_runner.go:211] docker container inspect missing-upgrade-093801 --format={{.State.Status}} returned with exit code 1
	I1006 02:48:55.625028 2379407 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-093801": docker container inspect missing-upgrade-093801 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-093801
	I1006 02:48:55.625042 2379407 oci.go:664] temporary error: container missing-upgrade-093801 status is  but expect it to be exited
	I1006 02:48:55.625070 2379407 retry.go:31] will retry after 323.657314ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-093801": docker container inspect missing-upgrade-093801 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-093801
	I1006 02:48:55.949665 2379407 cli_runner.go:164] Run: docker container inspect missing-upgrade-093801 --format={{.State.Status}}
	W1006 02:48:55.982147 2379407 cli_runner.go:211] docker container inspect missing-upgrade-093801 --format={{.State.Status}} returned with exit code 1
	I1006 02:48:55.982227 2379407 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-093801": docker container inspect missing-upgrade-093801 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-093801
	I1006 02:48:55.982238 2379407 oci.go:664] temporary error: container missing-upgrade-093801 status is  but expect it to be exited
	I1006 02:48:55.982267 2379407 retry.go:31] will retry after 925.545681ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-093801": docker container inspect missing-upgrade-093801 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-093801
	I1006 02:48:56.908300 2379407 cli_runner.go:164] Run: docker container inspect missing-upgrade-093801 --format={{.State.Status}}
	W1006 02:48:56.929856 2379407 cli_runner.go:211] docker container inspect missing-upgrade-093801 --format={{.State.Status}} returned with exit code 1
	I1006 02:48:56.929916 2379407 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-093801": docker container inspect missing-upgrade-093801 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-093801
	I1006 02:48:56.929931 2379407 oci.go:664] temporary error: container missing-upgrade-093801 status is  but expect it to be exited
	I1006 02:48:56.929955 2379407 retry.go:31] will retry after 1.461612674s: couldn't verify container is exited. %v: unknown state "missing-upgrade-093801": docker container inspect missing-upgrade-093801 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-093801
	I1006 02:48:58.392280 2379407 cli_runner.go:164] Run: docker container inspect missing-upgrade-093801 --format={{.State.Status}}
	W1006 02:48:58.422071 2379407 cli_runner.go:211] docker container inspect missing-upgrade-093801 --format={{.State.Status}} returned with exit code 1
	I1006 02:48:58.422160 2379407 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-093801": docker container inspect missing-upgrade-093801 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-093801
	I1006 02:48:58.422177 2379407 oci.go:664] temporary error: container missing-upgrade-093801 status is  but expect it to be exited
	I1006 02:48:58.422207 2379407 retry.go:31] will retry after 2.389507713s: couldn't verify container is exited. %v: unknown state "missing-upgrade-093801": docker container inspect missing-upgrade-093801 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-093801
	I1006 02:49:00.811911 2379407 cli_runner.go:164] Run: docker container inspect missing-upgrade-093801 --format={{.State.Status}}
	W1006 02:49:00.840421 2379407 cli_runner.go:211] docker container inspect missing-upgrade-093801 --format={{.State.Status}} returned with exit code 1
	I1006 02:49:00.840481 2379407 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-093801": docker container inspect missing-upgrade-093801 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-093801
	I1006 02:49:00.840495 2379407 oci.go:664] temporary error: container missing-upgrade-093801 status is  but expect it to be exited
	I1006 02:49:00.840520 2379407 retry.go:31] will retry after 2.511508311s: couldn't verify container is exited. %v: unknown state "missing-upgrade-093801": docker container inspect missing-upgrade-093801 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-093801
	I1006 02:49:03.352256 2379407 cli_runner.go:164] Run: docker container inspect missing-upgrade-093801 --format={{.State.Status}}
	W1006 02:49:03.370342 2379407 cli_runner.go:211] docker container inspect missing-upgrade-093801 --format={{.State.Status}} returned with exit code 1
	I1006 02:49:03.370405 2379407 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-093801": docker container inspect missing-upgrade-093801 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-093801
	I1006 02:49:03.370420 2379407 oci.go:664] temporary error: container missing-upgrade-093801 status is  but expect it to be exited
	I1006 02:49:03.370446 2379407 retry.go:31] will retry after 5.421465678s: couldn't verify container is exited. %v: unknown state "missing-upgrade-093801": docker container inspect missing-upgrade-093801 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-093801
	I1006 02:49:08.794464 2379407 cli_runner.go:164] Run: docker container inspect missing-upgrade-093801 --format={{.State.Status}}
	W1006 02:49:08.811436 2379407 cli_runner.go:211] docker container inspect missing-upgrade-093801 --format={{.State.Status}} returned with exit code 1
	I1006 02:49:08.811499 2379407 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-093801": docker container inspect missing-upgrade-093801 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-093801
	I1006 02:49:08.811514 2379407 oci.go:664] temporary error: container missing-upgrade-093801 status is  but expect it to be exited
	I1006 02:49:08.811539 2379407 retry.go:31] will retry after 6.550468973s: couldn't verify container is exited. %v: unknown state "missing-upgrade-093801": docker container inspect missing-upgrade-093801 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-093801
	I1006 02:49:15.365610 2379407 cli_runner.go:164] Run: docker container inspect missing-upgrade-093801 --format={{.State.Status}}
	W1006 02:49:15.382160 2379407 cli_runner.go:211] docker container inspect missing-upgrade-093801 --format={{.State.Status}} returned with exit code 1
	I1006 02:49:15.382225 2379407 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-093801": docker container inspect missing-upgrade-093801 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-093801
	I1006 02:49:15.382239 2379407 oci.go:664] temporary error: container missing-upgrade-093801 status is  but expect it to be exited
	I1006 02:49:15.382283 2379407 oci.go:88] couldn't shut down missing-upgrade-093801 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-093801": docker container inspect missing-upgrade-093801 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-093801
	 
	I1006 02:49:15.382345 2379407 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-093801
	I1006 02:49:15.398461 2379407 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-093801
	W1006 02:49:15.415319 2379407 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-093801 returned with exit code 1
	I1006 02:49:15.415431 2379407 cli_runner.go:164] Run: docker network inspect missing-upgrade-093801 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 02:49:15.433123 2379407 cli_runner.go:164] Run: docker network rm missing-upgrade-093801
	I1006 02:49:15.532338 2379407 fix.go:114] Sleeping 1 second for extra luck!
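The retry.go delays in the demolish sequence above (roughly 324ms, 926ms, 1.46s, 2.39s, 2.51s, 5.42s, 6.55s) grow approximately geometrically with jitter before the delete proceeds anyway. A minimal sketch of that backoff pattern; retryExpo and its parameters are hypothetical, not minikube's actual retry API:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryExpo retries fn with jittered, roughly doubling delays, the
// pattern visible in the retry.go:31 lines above.
func retryExpo(attempts int, base time.Duration, fn func() error) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return err
}

func main() {
	err := retryExpo(7, 300*time.Millisecond, func() error {
		return errors.New(`unknown state "missing-upgrade-093801"`)
	})
	fmt.Println("giving up (might be okay):", err)
}
```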
	I1006 02:49:16.532496 2379407 start.go:125] createHost starting for "" (driver="docker")
	I1006 02:49:16.535410 2379407 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1006 02:49:16.535558 2379407 start.go:159] libmachine.API.Create for "missing-upgrade-093801" (driver="docker")
	I1006 02:49:16.535584 2379407 client.go:168] LocalClient.Create starting
	I1006 02:49:16.535658 2379407 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem
	I1006 02:49:16.535693 2379407 main.go:141] libmachine: Decoding PEM data...
	I1006 02:49:16.535708 2379407 main.go:141] libmachine: Parsing certificate...
	I1006 02:49:16.535766 2379407 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem
	I1006 02:49:16.535785 2379407 main.go:141] libmachine: Decoding PEM data...
	I1006 02:49:16.535797 2379407 main.go:141] libmachine: Parsing certificate...
	I1006 02:49:16.536048 2379407 cli_runner.go:164] Run: docker network inspect missing-upgrade-093801 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 02:49:16.553974 2379407 cli_runner.go:211] docker network inspect missing-upgrade-093801 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 02:49:16.554056 2379407 network_create.go:281] running [docker network inspect missing-upgrade-093801] to gather additional debugging logs...
	I1006 02:49:16.554076 2379407 cli_runner.go:164] Run: docker network inspect missing-upgrade-093801
	W1006 02:49:16.575937 2379407 cli_runner.go:211] docker network inspect missing-upgrade-093801 returned with exit code 1
	I1006 02:49:16.575971 2379407 network_create.go:284] error running [docker network inspect missing-upgrade-093801]: docker network inspect missing-upgrade-093801: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-093801 not found
	I1006 02:49:16.575983 2379407 network_create.go:286] output of [docker network inspect missing-upgrade-093801]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-093801 not found
	
	** /stderr **
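The --format argument that cli_runner passes to `docker network inspect` above is a Go template that the docker CLI renders against its network type. A reduced, self-contained sketch of the same mechanism, with a hypothetical local struct and the ContainerIPs clause omitted:

```go
package main

import (
	"os"
	"text/template"
)

// A stripped-down stand-in for docker's network inspect type; field
// names match the template used in the log above.
type network struct {
	Name   string
	Driver string
	IPAM   struct {
		Config []struct{ Subnet, Gateway string }
	}
	Options map[string]string
}

const format = `{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}}`

func main() {
	n := network{Name: "missing-upgrade-093801", Driver: "bridge"}
	n.IPAM.Config = []struct{ Subnet, Gateway string }{
		{Subnet: "192.168.76.0/24", Gateway: "192.168.76.1"},
	}
	n.Options = map[string]string{"com.docker.network.driver.mtu": "1500"}
	tmpl := template.Must(template.New("net").Parse(format))
	_ = tmpl.Execute(os.Stdout, n)
}
```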
	I1006 02:49:16.576095 2379407 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 02:49:16.593271 2379407 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-23fd96ce330f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:5d:0d:78:1a} reservation:<nil>}
	I1006 02:49:16.593611 2379407 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-8cf15a65a1dd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:06:08:d3:35} reservation:<nil>}
	I1006 02:49:16.593946 2379407 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-c9c03c6849b1 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:17:3f:5a:4f} reservation:<nil>}
	I1006 02:49:16.594373 2379407 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002b77070}
	I1006 02:49:16.594397 2379407 network_create.go:124] attempt to create docker network missing-upgrade-093801 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1006 02:49:16.594458 2379407 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-093801 missing-upgrade-093801
	I1006 02:49:16.668347 2379407 network_create.go:108] docker network missing-upgrade-093801 192.168.76.0/24 created
	I1006 02:49:16.668377 2379407 kic.go:118] calculated static IP "192.168.76.2" for the "missing-upgrade-093801" container
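The subnet scan above skips 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 before settling on 192.168.76.0/24, stepping the third octet by 9. A minimal sketch of that scan; freeSubnet is a hypothetical helper, and the real code also inspects host interfaces and routes before declaring a subnet free:

```go
package main

import (
	"fmt"
	"net"
)

// freeSubnet walks candidate /24 networks starting at 192.168.49.0,
// stepping the third octet by 9 (the pattern visible above), and
// returns the first one not already claimed by an existing bridge.
func freeSubnet(taken map[string]bool) (*net.IPNet, error) {
	for third := 49; third <= 247; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if taken[cidr] {
			continue
		}
		_, subnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, err
		}
		return subnet, nil
	}
	return nil, fmt.Errorf("no free /24 found")
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	subnet, err := freeSubnet(taken)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("using", subnet) // 192.168.76.0/24
}
```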
	I1006 02:49:16.668465 2379407 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 02:49:16.686655 2379407 cli_runner.go:164] Run: docker volume create missing-upgrade-093801 --label name.minikube.sigs.k8s.io=missing-upgrade-093801 --label created_by.minikube.sigs.k8s.io=true
	I1006 02:49:16.704161 2379407 oci.go:103] Successfully created a docker volume missing-upgrade-093801
	I1006 02:49:16.704249 2379407 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-093801-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-093801 --entrypoint /usr/bin/test -v missing-upgrade-093801:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib
	I1006 02:49:18.313061 2379407 cli_runner.go:217] Completed: docker run --rm --name missing-upgrade-093801-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-093801 --entrypoint /usr/bin/test -v missing-upgrade-093801:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib: (1.608760228s)
	I1006 02:49:18.313089 2379407 oci.go:107] Successfully prepared a docker volume missing-upgrade-093801
	I1006 02:49:18.313117 2379407 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	W1006 02:49:18.313253 2379407 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1006 02:49:18.313354 2379407 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 02:49:18.384264 2379407 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-093801 --name missing-upgrade-093801 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-093801 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-093801 --network missing-upgrade-093801 --ip 192.168.76.2 --volume missing-upgrade-093801:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e
	I1006 02:49:18.740036 2379407 cli_runner.go:164] Run: docker container inspect missing-upgrade-093801 --format={{.State.Running}}
	I1006 02:49:18.766029 2379407 cli_runner.go:164] Run: docker container inspect missing-upgrade-093801 --format={{.State.Status}}
	I1006 02:49:18.789616 2379407 cli_runner.go:164] Run: docker exec missing-upgrade-093801 stat /var/lib/dpkg/alternatives/iptables
	I1006 02:49:18.875574 2379407 oci.go:144] the created container "missing-upgrade-093801" has a running status.
	I1006 02:49:18.875598 2379407 kic.go:222] Creating ssh key for kic: /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/missing-upgrade-093801/id_rsa...
	I1006 02:49:19.870549 2379407 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/missing-upgrade-093801/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 02:49:19.900806 2379407 cli_runner.go:164] Run: docker container inspect missing-upgrade-093801 --format={{.State.Status}}
	I1006 02:49:19.927436 2379407 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 02:49:19.927457 2379407 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-093801 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1006 02:49:20.029177 2379407 cli_runner.go:164] Run: docker container inspect missing-upgrade-093801 --format={{.State.Status}}
	I1006 02:49:20.061141 2379407 machine.go:88] provisioning docker machine ...
	I1006 02:49:20.061170 2379407 ubuntu.go:169] provisioning hostname "missing-upgrade-093801"
	I1006 02:49:20.061251 2379407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-093801
	I1006 02:49:20.096429 2379407 main.go:141] libmachine: Using SSH client type: native
	I1006 02:49:20.096888 2379407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 35433 <nil> <nil>}
	I1006 02:49:20.096902 2379407 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-093801 && echo "missing-upgrade-093801" | sudo tee /etc/hostname
	I1006 02:49:20.302528 2379407 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-093801
	
	I1006 02:49:20.302668 2379407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-093801
	I1006 02:49:20.331414 2379407 main.go:141] libmachine: Using SSH client type: native
	I1006 02:49:20.332710 2379407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 35433 <nil> <nil>}
	I1006 02:49:20.332741 2379407 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-093801' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-093801/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-093801' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 02:49:20.484734 2379407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 02:49:20.484761 2379407 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17314-2262959/.minikube CaCertPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17314-2262959/.minikube}
	I1006 02:49:20.484783 2379407 ubuntu.go:177] setting up certificates
	I1006 02:49:20.484793 2379407 provision.go:83] configureAuth start
	I1006 02:49:20.484854 2379407 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-093801
	I1006 02:49:20.503715 2379407 provision.go:138] copyHostCerts
	I1006 02:49:20.503786 2379407 exec_runner.go:144] found /home/jenkins/minikube-integration/17314-2262959/.minikube/key.pem, removing ...
	I1006 02:49:20.503799 2379407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17314-2262959/.minikube/key.pem
	I1006 02:49:20.503877 2379407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17314-2262959/.minikube/key.pem (1675 bytes)
	I1006 02:49:20.503975 2379407 exec_runner.go:144] found /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.pem, removing ...
	I1006 02:49:20.503986 2379407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.pem
	I1006 02:49:20.504014 2379407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.pem (1082 bytes)
	I1006 02:49:20.504084 2379407 exec_runner.go:144] found /home/jenkins/minikube-integration/17314-2262959/.minikube/cert.pem, removing ...
	I1006 02:49:20.504093 2379407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17314-2262959/.minikube/cert.pem
	I1006 02:49:20.504123 2379407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17314-2262959/.minikube/cert.pem (1123 bytes)
	I1006 02:49:20.504206 2379407 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-093801 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-093801]
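The server certificate generated above carries the node IP, loopback, and the local hostnames as SANs. A simplified, self-signed sketch of producing such a certificate; the real server.pem is signed by the minikube CA key pair referenced in the log, not self-signed:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Self-signed stand-in for the CA-signed server.pem above; the SAN
	// list mirrors the one in the log (node IP, loopback, hostnames).
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.missing-upgrade-093801"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		IPAddresses:  []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "missing-upgrade-093801"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("server cert: %d DER bytes\n", len(der))
}
```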
	I1006 02:49:21.066290 2379407 provision.go:172] copyRemoteCerts
	I1006 02:49:21.066371 2379407 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 02:49:21.066422 2379407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-093801
	I1006 02:49:21.087807 2379407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35433 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/missing-upgrade-093801/id_rsa Username:docker}
	I1006 02:49:21.192829 2379407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 02:49:21.217104 2379407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1006 02:49:21.243821 2379407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1006 02:49:21.265732 2379407 provision.go:86] duration metric: configureAuth took 780.925036ms
	I1006 02:49:21.265762 2379407 ubuntu.go:193] setting minikube options for container-runtime
	I1006 02:49:21.265954 2379407 config.go:182] Loaded profile config "missing-upgrade-093801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1006 02:49:21.266058 2379407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-093801
	I1006 02:49:21.296513 2379407 main.go:141] libmachine: Using SSH client type: native
	I1006 02:49:21.297040 2379407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 35433 <nil> <nil>}
	I1006 02:49:21.297061 2379407 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 02:49:21.747665 2379407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 02:49:21.747686 2379407 machine.go:91] provisioned docker machine in 1.68652669s
	I1006 02:49:21.747696 2379407 client.go:171] LocalClient.Create took 5.212106194s
	I1006 02:49:21.747706 2379407 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-093801" took 5.212148205s
	I1006 02:49:21.747714 2379407 start.go:300] post-start starting for "missing-upgrade-093801" (driver="docker")
	I1006 02:49:21.747723 2379407 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 02:49:21.747794 2379407 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 02:49:21.747833 2379407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-093801
	I1006 02:49:21.767997 2379407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35433 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/missing-upgrade-093801/id_rsa Username:docker}
	I1006 02:49:21.873108 2379407 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 02:49:21.876990 2379407 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1006 02:49:21.877018 2379407 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 02:49:21.877038 2379407 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1006 02:49:21.877064 2379407 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1006 02:49:21.877079 2379407 filesync.go:126] Scanning /home/jenkins/minikube-integration/17314-2262959/.minikube/addons for local assets ...
	I1006 02:49:21.877151 2379407 filesync.go:126] Scanning /home/jenkins/minikube-integration/17314-2262959/.minikube/files for local assets ...
	I1006 02:49:21.877229 2379407 filesync.go:149] local asset: /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/22683062.pem -> 22683062.pem in /etc/ssl/certs
	I1006 02:49:21.877338 2379407 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 02:49:21.886350 2379407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/22683062.pem --> /etc/ssl/certs/22683062.pem (1708 bytes)
	I1006 02:49:21.909781 2379407 start.go:303] post-start completed in 162.050946ms
	I1006 02:49:21.910178 2379407 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-093801
	I1006 02:49:21.928553 2379407 profile.go:148] Saving config to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/missing-upgrade-093801/config.json ...
	I1006 02:49:21.928854 2379407 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 02:49:21.928903 2379407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-093801
	I1006 02:49:21.956684 2379407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35433 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/missing-upgrade-093801/id_rsa Username:docker}
	I1006 02:49:22.054325 2379407 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 02:49:22.060452 2379407 start.go:128] duration metric: createHost completed in 5.527918564s
	I1006 02:49:22.060549 2379407 cli_runner.go:164] Run: docker container inspect missing-upgrade-093801 --format={{.State.Status}}
	W1006 02:49:22.079012 2379407 fix.go:128] unexpected machine state, will restart: <nil>
	I1006 02:49:22.079073 2379407 machine.go:88] provisioning docker machine ...
	I1006 02:49:22.079092 2379407 ubuntu.go:169] provisioning hostname "missing-upgrade-093801"
	I1006 02:49:22.079159 2379407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-093801
	I1006 02:49:22.101512 2379407 main.go:141] libmachine: Using SSH client type: native
	I1006 02:49:22.101932 2379407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 35433 <nil> <nil>}
	I1006 02:49:22.101949 2379407 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-093801 && echo "missing-upgrade-093801" | sudo tee /etc/hostname
	I1006 02:49:22.255913 2379407 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-093801
	
	I1006 02:49:22.255993 2379407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-093801
	I1006 02:49:22.274540 2379407 main.go:141] libmachine: Using SSH client type: native
	I1006 02:49:22.274948 2379407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 35433 <nil> <nil>}
	I1006 02:49:22.274974 2379407 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-093801' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-093801/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-093801' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 02:49:22.424569 2379407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 02:49:22.424597 2379407 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17314-2262959/.minikube CaCertPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17314-2262959/.minikube}
	I1006 02:49:22.424615 2379407 ubuntu.go:177] setting up certificates
	I1006 02:49:22.424623 2379407 provision.go:83] configureAuth start
	I1006 02:49:22.424691 2379407 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-093801
	I1006 02:49:22.457689 2379407 provision.go:138] copyHostCerts
	I1006 02:49:22.457757 2379407 exec_runner.go:144] found /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.pem, removing ...
	I1006 02:49:22.457772 2379407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.pem
	I1006 02:49:22.457848 2379407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.pem (1082 bytes)
	I1006 02:49:22.457948 2379407 exec_runner.go:144] found /home/jenkins/minikube-integration/17314-2262959/.minikube/cert.pem, removing ...
	I1006 02:49:22.457959 2379407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17314-2262959/.minikube/cert.pem
	I1006 02:49:22.457987 2379407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17314-2262959/.minikube/cert.pem (1123 bytes)
	I1006 02:49:22.458047 2379407 exec_runner.go:144] found /home/jenkins/minikube-integration/17314-2262959/.minikube/key.pem, removing ...
	I1006 02:49:22.458058 2379407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17314-2262959/.minikube/key.pem
	I1006 02:49:22.458086 2379407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17314-2262959/.minikube/key.pem (1675 bytes)
	I1006 02:49:22.458133 2379407 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-093801 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-093801]
	I1006 02:49:22.970730 2379407 provision.go:172] copyRemoteCerts
	I1006 02:49:22.970798 2379407 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 02:49:22.970840 2379407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-093801
	I1006 02:49:22.995531 2379407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35433 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/missing-upgrade-093801/id_rsa Username:docker}
	I1006 02:49:23.096341 2379407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 02:49:23.119761 2379407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1006 02:49:23.144747 2379407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1006 02:49:23.167420 2379407 provision.go:86] duration metric: configureAuth took 742.78221ms
	I1006 02:49:23.167443 2379407 ubuntu.go:193] setting minikube options for container-runtime
	I1006 02:49:23.167674 2379407 config.go:182] Loaded profile config "missing-upgrade-093801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1006 02:49:23.167809 2379407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-093801
	I1006 02:49:23.188149 2379407 main.go:141] libmachine: Using SSH client type: native
	I1006 02:49:23.188939 2379407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 35433 <nil> <nil>}
	I1006 02:49:23.189109 2379407 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 02:49:23.522846 2379407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 02:49:23.522877 2379407 machine.go:91] provisioned docker machine in 1.443795407s
	I1006 02:49:23.522889 2379407 start.go:300] post-start starting for "missing-upgrade-093801" (driver="docker")
	I1006 02:49:23.522901 2379407 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 02:49:23.522981 2379407 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 02:49:23.523026 2379407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-093801
	I1006 02:49:23.546430 2379407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35433 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/missing-upgrade-093801/id_rsa Username:docker}
	I1006 02:49:23.648509 2379407 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 02:49:23.652411 2379407 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1006 02:49:23.652445 2379407 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 02:49:23.652459 2379407 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1006 02:49:23.652467 2379407 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1006 02:49:23.652476 2379407 filesync.go:126] Scanning /home/jenkins/minikube-integration/17314-2262959/.minikube/addons for local assets ...
	I1006 02:49:23.652542 2379407 filesync.go:126] Scanning /home/jenkins/minikube-integration/17314-2262959/.minikube/files for local assets ...
	I1006 02:49:23.652624 2379407 filesync.go:149] local asset: /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/22683062.pem -> 22683062.pem in /etc/ssl/certs
	I1006 02:49:23.652731 2379407 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 02:49:23.661668 2379407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/22683062.pem --> /etc/ssl/certs/22683062.pem (1708 bytes)
	I1006 02:49:23.686743 2379407 start.go:303] post-start completed in 163.837191ms
	I1006 02:49:23.686822 2379407 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 02:49:23.686873 2379407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-093801
	I1006 02:49:23.704566 2379407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35433 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/missing-upgrade-093801/id_rsa Username:docker}
	I1006 02:49:23.800976 2379407 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 02:49:23.806786 2379407 fix.go:56] fixHost completed within 29.355677951s
	I1006 02:49:23.806811 2379407 start.go:83] releasing machines lock for "missing-upgrade-093801", held for 29.3557373s
	I1006 02:49:23.806882 2379407 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-093801
	I1006 02:49:23.825052 2379407 ssh_runner.go:195] Run: cat /version.json
	I1006 02:49:23.825106 2379407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-093801
	I1006 02:49:23.825338 2379407 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 02:49:23.825395 2379407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-093801
	I1006 02:49:23.849095 2379407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35433 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/missing-upgrade-093801/id_rsa Username:docker}
	I1006 02:49:23.849232 2379407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35433 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/missing-upgrade-093801/id_rsa Username:docker}
	W1006 02:49:24.061567 2379407 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1006 02:49:24.061726 2379407 ssh_runner.go:195] Run: systemctl --version
	I1006 02:49:24.067516 2379407 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 02:49:24.186263 2379407 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1006 02:49:24.192830 2379407 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 02:49:24.218668 2379407 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1006 02:49:24.218772 2379407 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 02:49:24.254112 2379407 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1006 02:49:24.254182 2379407 start.go:472] detecting cgroup driver to use...
	I1006 02:49:24.254227 2379407 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1006 02:49:24.254303 2379407 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 02:49:24.281048 2379407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 02:49:24.294821 2379407 docker.go:198] disabling cri-docker service (if available) ...
	I1006 02:49:24.294921 2379407 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 02:49:24.306635 2379407 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 02:49:24.318737 2379407 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1006 02:49:24.332151 2379407 docker.go:208] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1006 02:49:24.332222 2379407 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 02:49:24.455917 2379407 docker.go:214] disabling docker service ...
	I1006 02:49:24.456020 2379407 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 02:49:24.470532 2379407 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 02:49:24.486896 2379407 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 02:49:24.628928 2379407 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 02:49:24.778569 2379407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 02:49:24.802808 2379407 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 02:49:24.826414 2379407 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1006 02:49:24.826491 2379407 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 02:49:24.852989 2379407 out.go:177] 
	W1006 02:49:24.855430 2379407 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1006 02:49:24.855455 2379407 out.go:239] * 
	W1006 02:49:24.856384 2379407 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 02:49:24.858769 2379407 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:344: failed missing container upgrade from v1.17.0. args: out/minikube-linux-arm64 start -p missing-upgrade-093801 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 90
version_upgrade_test.go:346: *** TestMissingContainerUpgrade FAILED at 2023-10-06 02:49:24.933648477 +0000 UTC m=+2301.068326660
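The error above is the whole failure: minikube v1.31.2 rewrites pause_image in the CRI-O drop-in file /etc/crio/crio.conf.d/02-crio.conf, but the kicbase image this v1.17.0-era profile runs on (gcr.io/k8s-minikube/kicbase:v0.0.17, per the inspect output below) predates that drop-in directory, so the sed exits with status 2 and start aborts with RUNTIME_ENABLE. A minimal guarded variant of the same rewrite, as a sketch only (the fallback to /etc/crio/crio.conf is an assumption for illustration, not what minikube does):

	# Sketch: edit whichever CRI-O config file actually exists (fallback path assumed).
	cfg=/etc/crio/crio.conf.d/02-crio.conf
	[ -f "$cfg" ] || cfg=/etc/crio/crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$cfg"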
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-093801
helpers_test.go:235: (dbg) docker inspect missing-upgrade-093801:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b1e1f775d219c3f8e1697992bae3be14bbc1def7adc9c6a611b9a5941256b330",
	        "Created": "2023-10-06T02:49:18.400594604Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2380518,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-06T02:49:18.731694285Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/b1e1f775d219c3f8e1697992bae3be14bbc1def7adc9c6a611b9a5941256b330/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1e1f775d219c3f8e1697992bae3be14bbc1def7adc9c6a611b9a5941256b330/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1e1f775d219c3f8e1697992bae3be14bbc1def7adc9c6a611b9a5941256b330/hosts",
	        "LogPath": "/var/lib/docker/containers/b1e1f775d219c3f8e1697992bae3be14bbc1def7adc9c6a611b9a5941256b330/b1e1f775d219c3f8e1697992bae3be14bbc1def7adc9c6a611b9a5941256b330-json.log",
	        "Name": "/missing-upgrade-093801",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-093801:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "missing-upgrade-093801",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c88863f6cc4a2c0cba135281d06c1bc8fa332a6a40c7343905993c6cb6b3bd6a-init/diff:/var/lib/docker/overlay2/fe62f95b947a2754fde5f883d4293a92f922b968fb4bd7f3654a2c7bdbc522f2/diff:/var/lib/docker/overlay2/11046de113c043eb3cc53cd5095b99a6192e45e43ff2bf27563a50c6b83509ac/diff:/var/lib/docker/overlay2/66e29f554f934e3041176fcf2587bcf4ba25b124079adb8f1547cc4de718c9df/diff:/var/lib/docker/overlay2/057007ae6daad8d784a1c10fb40b43c77d3d01565ed2d20eb8ecd202157f8c55/diff:/var/lib/docker/overlay2/f2912256efe331942aeb0729d132f576df35acacc791a93a62c5ca317b2bcf8e/diff:/var/lib/docker/overlay2/e2cd22e7e313277c859907c960e2cbe37d8a7378a8751a948ea4003acd61bd6d/diff:/var/lib/docker/overlay2/99dd6ddea4e90170f3ce60454ce71d27fb9733787e59844a90b1f0938652161a/diff:/var/lib/docker/overlay2/704d5b948caf53be47cf27d86a9689298a28a774534b248b7c9d3974e54c2b66/diff:/var/lib/docker/overlay2/a5fbb8d76b9f8b317e00d88f17e91126d413d47eba60d3660e0c3da1d8ed0afc/diff:/var/lib/docker/overlay2/b5b709
0da5d390fa7807765440ba718153e91b87f2c6384c0c16e5a0ffd9bc91/diff:/var/lib/docker/overlay2/fc16d884e053237d0cdc879e1c890bd6d12616ce175347461af04475eef61782/diff:/var/lib/docker/overlay2/2b235e1fc76b03c061790bd587390d49c3113f5250fd6ec516e9b6df1962dfd0/diff:/var/lib/docker/overlay2/c18ea963d54a407a70c2dd1741565e1de1733536263a94ead2696362b67d93c8/diff:/var/lib/docker/overlay2/1aec82aa59cab0b3cab30746fd525f4f021fdbb6ff4cc6f96837bccf40fea3bb/diff:/var/lib/docker/overlay2/4256fb7513050d910a7b9f050d9f93486c5ccef0a946f0b15d058540511f4be3/diff:/var/lib/docker/overlay2/c6d85f101c3f5dde457f99b03032c44f5ab2f02f223cf4a444c5a5adfa149686/diff:/var/lib/docker/overlay2/d10583c5bd3954a7caec9c7a12061715c4939275cb95a38f9b5de669e56dbd00/diff:/var/lib/docker/overlay2/d9945a6dc28d46858ff81789fe5385a1cf01d5323f3d50386075a8c260cc48e3/diff:/var/lib/docker/overlay2/200a5bb7f2f41747f2695d06d8ff06c3efd7dd349ea02cebbe16d3f1580b4d3c/diff:/var/lib/docker/overlay2/558fc29f8666f5e8da6ee4c15cbd8004e002a37b90dbdaf2b21b2cda6c2a666f/diff:/var/lib/d
ocker/overlay2/cb8dfe62af2bf77453cc4e816b99f40082f2907bef2b3463a904b24d9ed13bd8/diff:/var/lib/docker/overlay2/ce9e2f6c6fa0270c63c0da2c1de3fcbc1e9d73c922cbe11138038e571135816f/diff:/var/lib/docker/overlay2/cd96b2445ea3bedb2aff6d9f6318d786441c08c592cab4f8a46f07f3a2e23b7e/diff:/var/lib/docker/overlay2/a9ab47b71ce4760e9f9476dc2b08c681d19fd05b75201d809653ab453d0f60eb/diff:/var/lib/docker/overlay2/b4ac75486fc8671cbd029e9947de123242f9ebe9781f6ff28d2a188fa5bbe95f/diff:/var/lib/docker/overlay2/23617da6d19b39241dddaf6bdece5a24480950318f069945c733d7a097b07397/diff:/var/lib/docker/overlay2/e4498e8ebea8ccf6d777e2debda425b0615293205de3f693edd758b02a4f3ad2/diff:/var/lib/docker/overlay2/e5bb475d80b646bb1fac2e91489351b118f1e96a7ed3803a703bf10f7659fdaf/diff:/var/lib/docker/overlay2/16847f3a99ff3b08c73ed46dc7a841c95fcc038ed6720f4608a8d828f5fdad41/diff:/var/lib/docker/overlay2/0e218c144fed594ef4296ed914b73d8797be09cb8ceb5e290accd2d89b8d8159/diff:/var/lib/docker/overlay2/c8925453ec9345ea7907b21857cfe3a0b0af3e9dc0743b6a966993f1422
6e755/diff:/var/lib/docker/overlay2/aa88d3ef0c1e7f8a334917b0083f3b5198cd93f6d313f3eb7e3043e93e2dd744/diff:/var/lib/docker/overlay2/020332a642dc2104372528a4bed139d6196bcbcf33e1b2fcd7bfefde293de5fe/diff:/var/lib/docker/overlay2/fe5f9352e8cff247613f285de138597654a75694b70ae17ce1dcaa8dec52fa30/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c88863f6cc4a2c0cba135281d06c1bc8fa332a6a40c7343905993c6cb6b3bd6a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c88863f6cc4a2c0cba135281d06c1bc8fa332a6a40c7343905993c6cb6b3bd6a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c88863f6cc4a2c0cba135281d06c1bc8fa332a6a40c7343905993c6cb6b3bd6a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-093801",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-093801/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-093801",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-093801",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-093801",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "320bcf1de7162b3c692510fa65b261f0259554d314d2f629da8399ee27dcfbf3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35433"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35432"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35429"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35431"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35430"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/320bcf1de716",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "missing-upgrade-093801": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b1e1f775d219",
	                        "missing-upgrade-093801"
	                    ],
	                    "NetworkID": "68ae88e7a216ffa56ea1d9cd13e3dc4e524326a972b8c0446973f3129e279f5a",
	                    "EndpointID": "1f1400f31d00bb0791b8997f8ebbbf15e320d7d4748c625bea4deb75e39f0e91",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
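For reference, the docker container inspect -f calls repeated throughout this log resolve the SSH endpoint from the Ports map shown in the dump above: index NetworkSettings.Ports by "22/tcp", take element 0, and print its HostPort. Run standalone against this container, the template would print 35433:

	# Same Go template the harness uses; against the dump above this prints 35433.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' missing-upgrade-093801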
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-093801 -n missing-upgrade-093801
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-093801 -n missing-upgrade-093801: exit status 6 (504.832358ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1006 02:49:25.457927 2381526 status.go:415] kubeconfig endpoint: got: 192.168.59.106:8443, want: 192.168.76.2:8443

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-093801" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
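The exit-status-6 warning matches the inspect dump: the recreated container came up on 192.168.76.2 (see the Networks block above), while the kubeconfig still carries the 192.168.59.106 endpoint written by the original v1.17.0 run. The remedy the status output itself names, scoped here to this profile (adding minikube's standard -p profile flag is my assumption of the intended invocation):

	minikube update-context -p missing-upgrade-093801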
helpers_test.go:175: Cleaning up "missing-upgrade-093801" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-093801
E1006 02:49:27.706543 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-093801: (2.519715905s)
--- FAIL: TestMissingContainerUpgrade (146.89s)
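One provisioning detail worth noting, since it runs near the top of this failure's log and again in the stopped-upgrade log below: the hostname pin in /etc/hosts is kept idempotent with whole-line greps. Restated as a standalone sketch, with HOSTNAME standing in for the profile name:

	# Idempotent 127.0.1.1 pin, as the provisioner runs it over SSH (HOSTNAME is illustrative).
	HOSTNAME=missing-upgrade-093801
	if ! grep -xq ".*\s$HOSTNAME" /etc/hosts; then            # any line already ending in the hostname?
		if grep -xq '127.0.1.1\s.*' /etc/hosts; then          # existing 127.0.1.1 entry: rewrite in place
			sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $HOSTNAME/g" /etc/hosts
		else                                                  # no entry yet: append one
			echo "127.0.1.1 $HOSTNAME" | sudo tee -a /etc/hosts
		fi
	fi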

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (74.2s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.17.0.1259185839.exe start -p stopped-upgrade-670887 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.17.0.1259185839.exe start -p stopped-upgrade-670887 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m4.647830499s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.17.0.1259185839.exe -p stopped-upgrade-670887 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.17.0.1259185839.exe -p stopped-upgrade-670887 stop: (2.63245746s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-670887 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p stopped-upgrade-670887 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (6.918565238s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-670887] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17314
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17314-2262959/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-2262959/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-670887 in cluster stopped-upgrade-670887
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-670887" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 02:50:36.451212 2388166 out.go:296] Setting OutFile to fd 1 ...
	I1006 02:50:36.451406 2388166 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 02:50:36.451420 2388166 out.go:309] Setting ErrFile to fd 2...
	I1006 02:50:36.451427 2388166 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 02:50:36.451696 2388166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-2262959/.minikube/bin
	I1006 02:50:36.452064 2388166 out.go:303] Setting JSON to false
	I1006 02:50:36.453189 2388166 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":45183,"bootTime":1696515454,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1006 02:50:36.453264 2388166 start.go:138] virtualization:  
	I1006 02:50:36.459785 2388166 out.go:177] * [stopped-upgrade-670887] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1006 02:50:36.462571 2388166 out.go:177]   - MINIKUBE_LOCATION=17314
	I1006 02:50:36.464508 2388166 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 02:50:36.462736 2388166 notify.go:220] Checking for updates...
	I1006 02:50:36.468637 2388166 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17314-2262959/kubeconfig
	I1006 02:50:36.470766 2388166 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-2262959/.minikube
	I1006 02:50:36.472715 2388166 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 02:50:36.474434 2388166 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 02:50:36.476747 2388166 config.go:182] Loaded profile config "stopped-upgrade-670887": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1006 02:50:36.479225 2388166 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	I1006 02:50:36.481001 2388166 driver.go:378] Setting default libvirt URI to qemu:///system
	I1006 02:50:36.526920 2388166 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1006 02:50:36.527012 2388166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 02:50:36.694512 2388166 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-06 02:50:36.681256688 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1006 02:50:36.694644 2388166 docker.go:295] overlay module found
	I1006 02:50:36.696918 2388166 out.go:177] * Using the docker driver based on existing profile
	I1006 02:50:36.699099 2388166 start.go:298] selected driver: docker
	I1006 02:50:36.699125 2388166 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-670887 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-670887 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.168 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1006 02:50:36.699235 2388166 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 02:50:36.699877 2388166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 02:50:36.816928 2388166 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-06 02:50:36.804794593 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1006 02:50:36.817248 2388166 cni.go:84] Creating CNI manager for ""
	I1006 02:50:36.817266 2388166 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 02:50:36.817279 2388166 start_flags.go:323] config:
	{Name:stopped-upgrade-670887 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-670887 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.168 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1006 02:50:36.819499 2388166 out.go:177] * Starting control plane node stopped-upgrade-670887 in cluster stopped-upgrade-670887
	I1006 02:50:36.821316 2388166 cache.go:122] Beginning downloading kic base image for docker with crio
	I1006 02:50:36.824773 2388166 out.go:177] * Pulling base image ...
	I1006 02:50:36.826533 2388166 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1006 02:50:36.826617 2388166 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1006 02:50:36.863791 2388166 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I1006 02:50:36.863812 2388166 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W1006 02:50:36.896254 2388166 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1006 02:50:36.896396 2388166 profile.go:148] Saving config to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/stopped-upgrade-670887/config.json ...
	I1006 02:50:36.896638 2388166 cache.go:195] Successfully downloaded all kic artifacts
	I1006 02:50:36.896673 2388166 start.go:365] acquiring machines lock for stopped-upgrade-670887: {Name:mk6cf48b456f21b0747bf8fc30dd28efaa186871 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 02:50:36.896726 2388166 start.go:369] acquired machines lock for "stopped-upgrade-670887" in 31.262µs
	I1006 02:50:36.896739 2388166 start.go:96] Skipping create...Using existing machine configuration
	I1006 02:50:36.896744 2388166 fix.go:54] fixHost starting: 
	I1006 02:50:36.897015 2388166 cli_runner.go:164] Run: docker container inspect stopped-upgrade-670887 --format={{.State.Status}}
	I1006 02:50:36.897353 2388166 cache.go:107] acquiring lock: {Name:mkebee88fce238ff0e7e787ed96d7d7331a3727b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 02:50:36.897416 2388166 cache.go:115] /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1006 02:50:36.897425 2388166 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 77.056µs
	I1006 02:50:36.897433 2388166 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1006 02:50:36.897443 2388166 cache.go:107] acquiring lock: {Name:mk46ae56fe46f5390180d521a6cc721035e56a8b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 02:50:36.897481 2388166 cache.go:115] /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1006 02:50:36.897487 2388166 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 45.112µs
	I1006 02:50:36.897496 2388166 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I1006 02:50:36.897505 2388166 cache.go:107] acquiring lock: {Name:mk86e13e6b3b437bb7e26c6b60be830b430c6e39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 02:50:36.897530 2388166 cache.go:115] /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1006 02:50:36.897535 2388166 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 31.516µs
	I1006 02:50:36.897542 2388166 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I1006 02:50:36.897550 2388166 cache.go:107] acquiring lock: {Name:mkd5603e96a2ebfc7e6761dedb6f700bf1e3e05f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 02:50:36.897575 2388166 cache.go:115] /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1006 02:50:36.897579 2388166 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 30.015µs
	I1006 02:50:36.897585 2388166 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I1006 02:50:36.897597 2388166 cache.go:107] acquiring lock: {Name:mk9386f4a4cab443db6cd71364319df13794a376 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 02:50:36.897620 2388166 cache.go:115] /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1006 02:50:36.897625 2388166 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 29.375µs
	I1006 02:50:36.897631 2388166 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I1006 02:50:36.897639 2388166 cache.go:107] acquiring lock: {Name:mk2d20f834ef30a07ef1d766d525a500dd7188a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 02:50:36.897662 2388166 cache.go:115] /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1006 02:50:36.897668 2388166 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 29.498µs
	I1006 02:50:36.897673 2388166 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I1006 02:50:36.897681 2388166 cache.go:107] acquiring lock: {Name:mk543ec0afb71de658a5f310eba68d04a247c52f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 02:50:36.897707 2388166 cache.go:115] /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1006 02:50:36.897711 2388166 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 31.131µs
	I1006 02:50:36.897717 2388166 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1006 02:50:36.897725 2388166 cache.go:107] acquiring lock: {Name:mk1d388e71632749d28243135d046b61853f0ee4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 02:50:36.897747 2388166 cache.go:115] /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1006 02:50:36.897752 2388166 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 27.586µs
	I1006 02:50:36.897757 2388166 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I1006 02:50:36.897763 2388166 cache.go:87] Successfully saved all images to host disk.
	I1006 02:50:36.926324 2388166 fix.go:102] recreateIfNeeded on stopped-upgrade-670887: state=Stopped err=<nil>
	W1006 02:50:36.926351 2388166 fix.go:128] unexpected machine state, will restart: <nil>
	I1006 02:50:36.928807 2388166 out.go:177] * Restarting existing docker container for "stopped-upgrade-670887" ...
	I1006 02:50:36.931269 2388166 cli_runner.go:164] Run: docker start stopped-upgrade-670887
	I1006 02:50:37.445958 2388166 cli_runner.go:164] Run: docker container inspect stopped-upgrade-670887 --format={{.State.Status}}
	I1006 02:50:37.475841 2388166 kic.go:427] container "stopped-upgrade-670887" state is running.
	I1006 02:50:37.476253 2388166 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-670887
	I1006 02:50:37.507537 2388166 profile.go:148] Saving config to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/stopped-upgrade-670887/config.json ...
	I1006 02:50:37.507762 2388166 machine.go:88] provisioning docker machine ...
	I1006 02:50:37.507776 2388166 ubuntu.go:169] provisioning hostname "stopped-upgrade-670887"
	I1006 02:50:37.507829 2388166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-670887
	I1006 02:50:37.543029 2388166 main.go:141] libmachine: Using SSH client type: native
	I1006 02:50:37.543471 2388166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 35446 <nil> <nil>}
	I1006 02:50:37.543484 2388166 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-670887 && echo "stopped-upgrade-670887" | sudo tee /etc/hostname
	I1006 02:50:37.546811 2388166 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1006 02:50:40.703920 2388166 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-670887
	
	I1006 02:50:40.704000 2388166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-670887
	I1006 02:50:40.723551 2388166 main.go:141] libmachine: Using SSH client type: native
	I1006 02:50:40.723954 2388166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 35446 <nil> <nil>}
	I1006 02:50:40.723980 2388166 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-670887' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-670887/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-670887' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 02:50:40.868941 2388166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 02:50:40.869011 2388166 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17314-2262959/.minikube CaCertPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17314-2262959/.minikube}
	I1006 02:50:40.869057 2388166 ubuntu.go:177] setting up certificates
	I1006 02:50:40.869103 2388166 provision.go:83] configureAuth start
	I1006 02:50:40.869200 2388166 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-670887
	I1006 02:50:40.888757 2388166 provision.go:138] copyHostCerts
	I1006 02:50:40.888832 2388166 exec_runner.go:144] found /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.pem, removing ...
	I1006 02:50:40.888841 2388166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.pem
	I1006 02:50:40.888918 2388166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.pem (1082 bytes)
	I1006 02:50:40.889031 2388166 exec_runner.go:144] found /home/jenkins/minikube-integration/17314-2262959/.minikube/cert.pem, removing ...
	I1006 02:50:40.889038 2388166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17314-2262959/.minikube/cert.pem
	I1006 02:50:40.889074 2388166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17314-2262959/.minikube/cert.pem (1123 bytes)
	I1006 02:50:40.889173 2388166 exec_runner.go:144] found /home/jenkins/minikube-integration/17314-2262959/.minikube/key.pem, removing ...
	I1006 02:50:40.889178 2388166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17314-2262959/.minikube/key.pem
	I1006 02:50:40.889216 2388166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17314-2262959/.minikube/key.pem (1675 bytes)
	I1006 02:50:40.889645 2388166 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-670887 san=[192.168.59.168 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-670887]
	I1006 02:50:41.232503 2388166 provision.go:172] copyRemoteCerts
	I1006 02:50:41.232572 2388166 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 02:50:41.232624 2388166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-670887
	I1006 02:50:41.253761 2388166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35446 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/stopped-upgrade-670887/id_rsa Username:docker}
	I1006 02:50:41.356211 2388166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 02:50:41.383719 2388166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 02:50:41.407185 2388166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1006 02:50:41.431796 2388166 provision.go:86] duration metric: configureAuth took 562.666614ms
	I1006 02:50:41.431820 2388166 ubuntu.go:193] setting minikube options for container-runtime
	I1006 02:50:41.431995 2388166 config.go:182] Loaded profile config "stopped-upgrade-670887": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1006 02:50:41.432096 2388166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-670887
	I1006 02:50:41.457938 2388166 main.go:141] libmachine: Using SSH client type: native
	I1006 02:50:41.458342 2388166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 35446 <nil> <nil>}
	I1006 02:50:41.458364 2388166 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 02:50:41.871783 2388166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 02:50:41.871804 2388166 machine.go:91] provisioned docker machine in 4.364032423s
	I1006 02:50:41.871822 2388166 start.go:300] post-start starting for "stopped-upgrade-670887" (driver="docker")
	I1006 02:50:41.871839 2388166 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 02:50:41.871917 2388166 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 02:50:41.871966 2388166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-670887
	I1006 02:50:41.893562 2388166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35446 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/stopped-upgrade-670887/id_rsa Username:docker}
	I1006 02:50:41.996087 2388166 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 02:50:42.000696 2388166 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1006 02:50:42.000728 2388166 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 02:50:42.000740 2388166 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1006 02:50:42.000756 2388166 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1006 02:50:42.000773 2388166 filesync.go:126] Scanning /home/jenkins/minikube-integration/17314-2262959/.minikube/addons for local assets ...
	I1006 02:50:42.000840 2388166 filesync.go:126] Scanning /home/jenkins/minikube-integration/17314-2262959/.minikube/files for local assets ...
	I1006 02:50:42.000929 2388166 filesync.go:149] local asset: /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/22683062.pem -> 22683062.pem in /etc/ssl/certs
	I1006 02:50:42.001049 2388166 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 02:50:42.011152 2388166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/22683062.pem --> /etc/ssl/certs/22683062.pem (1708 bytes)
	I1006 02:50:42.035986 2388166 start.go:303] post-start completed in 164.147658ms
	I1006 02:50:42.036065 2388166 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 02:50:42.036111 2388166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-670887
	I1006 02:50:42.058104 2388166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35446 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/stopped-upgrade-670887/id_rsa Username:docker}
	I1006 02:50:42.171113 2388166 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 02:50:42.177615 2388166 fix.go:56] fixHost completed within 5.280858653s
	I1006 02:50:42.177697 2388166 start.go:83] releasing machines lock for "stopped-upgrade-670887", held for 5.280961563s
	I1006 02:50:42.177798 2388166 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-670887
	I1006 02:50:42.201606 2388166 ssh_runner.go:195] Run: cat /version.json
	I1006 02:50:42.201673 2388166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-670887
	I1006 02:50:42.201613 2388166 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 02:50:42.201793 2388166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-670887
	I1006 02:50:42.225858 2388166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35446 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/stopped-upgrade-670887/id_rsa Username:docker}
	I1006 02:50:42.240839 2388166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35446 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/stopped-upgrade-670887/id_rsa Username:docker}
	W1006 02:50:42.425762 2388166 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1006 02:50:42.425851 2388166 ssh_runner.go:195] Run: systemctl --version
	I1006 02:50:42.431816 2388166 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 02:50:42.690684 2388166 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1006 02:50:42.696717 2388166 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 02:50:42.724805 2388166 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1006 02:50:42.724887 2388166 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 02:50:42.752878 2388166 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1006 02:50:42.752903 2388166 start.go:472] detecting cgroup driver to use...
	I1006 02:50:42.752932 2388166 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1006 02:50:42.752981 2388166 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 02:50:42.782455 2388166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 02:50:42.794616 2388166 docker.go:198] disabling cri-docker service (if available) ...
	I1006 02:50:42.794713 2388166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 02:50:42.807353 2388166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 02:50:42.819757 2388166 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1006 02:50:42.832603 2388166 docker.go:208] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1006 02:50:42.832679 2388166 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 02:50:42.929628 2388166 docker.go:214] disabling docker service ...
	I1006 02:50:42.929735 2388166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 02:50:42.943891 2388166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 02:50:42.956184 2388166 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 02:50:43.061296 2388166 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 02:50:43.180708 2388166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 02:50:43.194665 2388166 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 02:50:43.212148 2388166 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1006 02:50:43.212248 2388166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 02:50:43.226657 2388166 out.go:177] 
	W1006 02:50:43.229168 2388166 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1006 02:50:43.229190 2388166 out.go:239] * 
	W1006 02:50:43.230782 2388166 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 02:50:43.232841 2388166 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p stopped-upgrade-670887 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (74.20s)
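
Note on the failure above: the stderr shows the mechanical cause. The Ubuntu 20.04.1 machine created by the old v1.17.0 binary has no /etc/crio/crio.conf.d/02-crio.conf, so the sed that rewrites pause_image exits with status 2 and start aborts with RUNTIME_ENABLE. A minimal defensive sketch of that provisioning step, assuming the same drop-in path the log edits and that pause_image belongs in CRI-O's [crio.image] TOML table (illustrative only, not minikube's actual remediation):

	# Hypothetical guard: create the CRI-O drop-in when an older base image
	# does not ship it, then rewrite pause_image in place as the log attempts.
	sudo mkdir -p /etc/crio/crio.conf.d
	if [ ! -e /etc/crio/crio.conf.d/02-crio.conf ]; then
	    printf '[crio.image]\npause_image = "registry.k8s.io/pause:3.2"\n' \
	        | sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null
	else
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' \
	        /etc/crio/crio.conf.d/02-crio.conf
	fi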

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (90.46s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-647181 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1006 02:52:36.988991 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-647181 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m22.210564768s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-647181] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17314
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17314-2262959/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-2262959/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node pause-647181 in cluster pause-647181
	* Pulling base image ...
	* Updating the running docker "pause-647181" container ...
	* Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	* Configuring CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-647181" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
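
Note: a second start against an already-running cluster is expected to log "The running cluster does not require reconfiguration" and leave the runtime alone; that marker never appears in the stdout above, and the stderr below shows the profile being re-provisioned instead (certificates re-copied, CRI-O reconfigured and restarted). A rough shell stand-in for the assertion in pause_test.go:100, shown only as a sketch of what is being checked:

	# Hypothetical re-run of the check: grep the combined second-start output
	# for the reconfiguration-skip marker that the test expects.
	out/minikube-linux-arm64 start -p pause-647181 --alsologtostderr -v=1 \
	    --driver=docker --container-runtime=crio 2>&1 \
	    | grep -q 'The running cluster does not require reconfiguration' \
	    || echo 'marker missing: cluster was reconfigured'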
** stderr ** 
	I1006 02:52:35.635117 2399191 out.go:296] Setting OutFile to fd 1 ...
	I1006 02:52:35.635341 2399191 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 02:52:35.635347 2399191 out.go:309] Setting ErrFile to fd 2...
	I1006 02:52:35.635358 2399191 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 02:52:35.635627 2399191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-2262959/.minikube/bin
	I1006 02:52:35.636002 2399191 out.go:303] Setting JSON to false
	I1006 02:52:35.637226 2399191 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":45302,"bootTime":1696515454,"procs":365,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1006 02:52:35.637304 2399191 start.go:138] virtualization:  
	I1006 02:52:35.642285 2399191 out.go:177] * [pause-647181] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1006 02:52:35.645393 2399191 notify.go:220] Checking for updates...
	I1006 02:52:35.646423 2399191 out.go:177]   - MINIKUBE_LOCATION=17314
	I1006 02:52:35.650214 2399191 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 02:52:35.652643 2399191 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17314-2262959/kubeconfig
	I1006 02:52:35.660766 2399191 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-2262959/.minikube
	I1006 02:52:35.662957 2399191 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 02:52:35.665612 2399191 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 02:52:35.668136 2399191 config.go:182] Loaded profile config "pause-647181": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1006 02:52:35.668739 2399191 driver.go:378] Setting default libvirt URI to qemu:///system
	I1006 02:52:35.700119 2399191 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1006 02:52:35.700246 2399191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 02:52:35.829052 2399191 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:55 SystemTime:2023-10-06 02:52:35.81717301 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1006 02:52:35.829174 2399191 docker.go:295] overlay module found
	I1006 02:52:35.831820 2399191 out.go:177] * Using the docker driver based on existing profile
	I1006 02:52:35.835327 2399191 start.go:298] selected driver: docker
	I1006 02:52:35.835351 2399191 start.go:902] validating driver "docker" against &{Name:pause-647181 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:pause-647181 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1006 02:52:35.835505 2399191 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 02:52:35.835634 2399191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 02:52:35.999856 2399191 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:55 SystemTime:2023-10-06 02:52:35.985898653 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1006 02:52:36.000371 2399191 cni.go:84] Creating CNI manager for ""
	I1006 02:52:36.000395 2399191 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 02:52:36.000410 2399191 start_flags.go:323] config:
	{Name:pause-647181 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:pause-647181 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1006 02:52:36.006027 2399191 out.go:177] * Starting control plane node pause-647181 in cluster pause-647181
	I1006 02:52:36.008162 2399191 cache.go:122] Beginning downloading kic base image for docker with crio
	I1006 02:52:36.010556 2399191 out.go:177] * Pulling base image ...
	I1006 02:52:36.013014 2399191 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1006 02:52:36.013081 2399191 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
	I1006 02:52:36.013096 2399191 cache.go:57] Caching tarball of preloaded images
	I1006 02:52:36.013210 2399191 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1006 02:52:36.013495 2399191 preload.go:174] Found /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1006 02:52:36.013511 2399191 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1006 02:52:36.013647 2399191 profile.go:148] Saving config to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/pause-647181/config.json ...
	I1006 02:52:36.058912 2399191 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
	I1006 02:52:36.058938 2399191 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
	I1006 02:52:36.058959 2399191 cache.go:195] Successfully downloaded all kic artifacts
	I1006 02:52:36.059000 2399191 start.go:365] acquiring machines lock for pause-647181: {Name:mkc7407948d409886cff9a6226b5e291afc1f982 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 02:52:36.059091 2399191 start.go:369] acquired machines lock for "pause-647181" in 60.079µs
	I1006 02:52:36.059116 2399191 start.go:96] Skipping create...Using existing machine configuration
	I1006 02:52:36.059127 2399191 fix.go:54] fixHost starting: 
	I1006 02:52:36.059418 2399191 cli_runner.go:164] Run: docker container inspect pause-647181 --format={{.State.Status}}
	I1006 02:52:36.082107 2399191 fix.go:102] recreateIfNeeded on pause-647181: state=Running err=<nil>
	W1006 02:52:36.082158 2399191 fix.go:128] unexpected machine state, will restart: <nil>
	I1006 02:52:36.084640 2399191 out.go:177] * Updating the running docker "pause-647181" container ...
	I1006 02:52:36.086828 2399191 machine.go:88] provisioning docker machine ...
	I1006 02:52:36.086867 2399191 ubuntu.go:169] provisioning hostname "pause-647181"
	I1006 02:52:36.086963 2399191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-647181
	I1006 02:52:36.124286 2399191 main.go:141] libmachine: Using SSH client type: native
	I1006 02:52:36.124702 2399191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 35455 <nil> <nil>}
	I1006 02:52:36.124726 2399191 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-647181 && echo "pause-647181" | sudo tee /etc/hostname
	I1006 02:52:36.294934 2399191 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-647181
	
	I1006 02:52:36.295014 2399191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-647181
	I1006 02:52:36.322933 2399191 main.go:141] libmachine: Using SSH client type: native
	I1006 02:52:36.323351 2399191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 35455 <nil> <nil>}
	I1006 02:52:36.323375 2399191 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-647181' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-647181/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-647181' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 02:52:36.468966 2399191 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 02:52:36.468989 2399191 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17314-2262959/.minikube CaCertPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17314-2262959/.minikube}
	I1006 02:52:36.469008 2399191 ubuntu.go:177] setting up certificates
	I1006 02:52:36.469017 2399191 provision.go:83] configureAuth start
	I1006 02:52:36.469082 2399191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-647181
	I1006 02:52:36.502937 2399191 provision.go:138] copyHostCerts
	I1006 02:52:36.503006 2399191 exec_runner.go:144] found /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.pem, removing ...
	I1006 02:52:36.503015 2399191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.pem
	I1006 02:52:36.503165 2399191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.pem (1082 bytes)
	I1006 02:52:36.503271 2399191 exec_runner.go:144] found /home/jenkins/minikube-integration/17314-2262959/.minikube/cert.pem, removing ...
	I1006 02:52:36.503278 2399191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17314-2262959/.minikube/cert.pem
	I1006 02:52:36.503306 2399191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17314-2262959/.minikube/cert.pem (1123 bytes)
	I1006 02:52:36.503354 2399191 exec_runner.go:144] found /home/jenkins/minikube-integration/17314-2262959/.minikube/key.pem, removing ...
	I1006 02:52:36.503359 2399191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17314-2262959/.minikube/key.pem
	I1006 02:52:36.503382 2399191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17314-2262959/.minikube/key.pem (1675 bytes)
	I1006 02:52:36.503422 2399191 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca-key.pem org=jenkins.pause-647181 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube pause-647181]
	I1006 02:52:36.771255 2399191 provision.go:172] copyRemoteCerts
	I1006 02:52:36.771331 2399191 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 02:52:36.771378 2399191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-647181
	I1006 02:52:36.799921 2399191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35455 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/pause-647181/id_rsa Username:docker}
	I1006 02:52:36.902047 2399191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 02:52:36.951639 2399191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1006 02:52:37.007772 2399191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1006 02:52:37.052306 2399191 provision.go:86] duration metric: configureAuth took 583.274559ms
	I1006 02:52:37.052333 2399191 ubuntu.go:193] setting minikube options for container-runtime
	I1006 02:52:37.052566 2399191 config.go:182] Loaded profile config "pause-647181": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1006 02:52:37.052675 2399191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-647181
	I1006 02:52:37.074884 2399191 main.go:141] libmachine: Using SSH client type: native
	I1006 02:52:37.075358 2399191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 35455 <nil> <nil>}
	I1006 02:52:37.075375 2399191 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 02:52:42.551599 2399191 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 02:52:42.551619 2399191 machine.go:91] provisioned docker machine in 6.464769203s
	I1006 02:52:42.551630 2399191 start.go:300] post-start starting for "pause-647181" (driver="docker")
	I1006 02:52:42.551640 2399191 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 02:52:42.551703 2399191 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 02:52:42.551745 2399191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-647181
	I1006 02:52:42.583219 2399191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35455 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/pause-647181/id_rsa Username:docker}
	I1006 02:52:42.683219 2399191 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 02:52:42.688367 2399191 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 02:52:42.688410 2399191 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1006 02:52:42.688422 2399191 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1006 02:52:42.688429 2399191 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1006 02:52:42.688453 2399191 filesync.go:126] Scanning /home/jenkins/minikube-integration/17314-2262959/.minikube/addons for local assets ...
	I1006 02:52:42.688508 2399191 filesync.go:126] Scanning /home/jenkins/minikube-integration/17314-2262959/.minikube/files for local assets ...
	I1006 02:52:42.688597 2399191 filesync.go:149] local asset: /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/22683062.pem -> 22683062.pem in /etc/ssl/certs
	I1006 02:52:42.688707 2399191 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 02:52:42.700655 2399191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/22683062.pem --> /etc/ssl/certs/22683062.pem (1708 bytes)
	I1006 02:52:42.734653 2399191 start.go:303] post-start completed in 183.008339ms
	I1006 02:52:42.734736 2399191 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 02:52:42.734781 2399191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-647181
	I1006 02:52:42.766952 2399191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35455 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/pause-647181/id_rsa Username:docker}
	I1006 02:52:42.865355 2399191 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 02:52:42.876305 2399191 fix.go:56] fixHost completed within 6.817169523s
	I1006 02:52:42.876333 2399191 start.go:83] releasing machines lock for "pause-647181", held for 6.817228379s
	I1006 02:52:42.876412 2399191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-647181
	I1006 02:52:42.896369 2399191 ssh_runner.go:195] Run: cat /version.json
	I1006 02:52:42.896420 2399191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-647181
	I1006 02:52:42.896629 2399191 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 02:52:42.896667 2399191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-647181
	I1006 02:52:42.928638 2399191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35455 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/pause-647181/id_rsa Username:docker}
	I1006 02:52:42.932619 2399191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35455 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/pause-647181/id_rsa Username:docker}
	I1006 02:52:43.168289 2399191 ssh_runner.go:195] Run: systemctl --version
	I1006 02:52:43.175774 2399191 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 02:52:43.350024 2399191 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1006 02:52:43.355644 2399191 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 02:52:43.367544 2399191 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1006 02:52:43.367626 2399191 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 02:52:43.378909 2399191 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1006 02:52:43.378930 2399191 start.go:472] detecting cgroup driver to use...
	I1006 02:52:43.378961 2399191 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1006 02:52:43.379010 2399191 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 02:52:43.394931 2399191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 02:52:43.409906 2399191 docker.go:198] disabling cri-docker service (if available) ...
	I1006 02:52:43.409964 2399191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 02:52:43.426851 2399191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 02:52:43.441919 2399191 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 02:52:43.602928 2399191 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 02:52:43.765045 2399191 docker.go:214] disabling docker service ...
	I1006 02:52:43.765115 2399191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 02:52:43.782236 2399191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 02:52:43.797369 2399191 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 02:52:44.050812 2399191 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 02:52:44.441157 2399191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 02:52:44.473206 2399191 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 02:52:44.546372 2399191 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1006 02:52:44.546439 2399191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 02:52:44.595609 2399191 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 02:52:44.595675 2399191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 02:52:44.639366 2399191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 02:52:44.699009 2399191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 02:52:44.777959 2399191 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 02:52:44.863536 2399191 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 02:52:44.927731 2399191 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 02:52:44.991286 2399191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 02:52:45.301743 2399191 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1006 02:52:45.756498 2399191 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 02:52:45.756570 2399191 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 02:52:45.761920 2399191 start.go:540] Will wait 60s for crictl version
	I1006 02:52:45.761980 2399191 ssh_runner.go:195] Run: which crictl
	I1006 02:52:45.774602 2399191 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1006 02:52:45.835617 2399191 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1006 02:52:45.835790 2399191 ssh_runner.go:195] Run: crio --version
	I1006 02:52:45.900440 2399191 ssh_runner.go:195] Run: crio --version
	I1006 02:52:45.995513 2399191 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	I1006 02:52:45.998485 2399191 cli_runner.go:164] Run: docker network inspect pause-647181 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 02:52:46.060667 2399191 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1006 02:52:46.073355 2399191 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1006 02:52:46.073417 2399191 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 02:52:46.174936 2399191 crio.go:496] all images are preloaded for cri-o runtime.
	I1006 02:52:46.174956 2399191 crio.go:415] Images already preloaded, skipping extraction
	I1006 02:52:46.175012 2399191 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 02:52:46.254409 2399191 crio.go:496] all images are preloaded for cri-o runtime.
	I1006 02:52:46.254433 2399191 cache_images.go:84] Images are preloaded, skipping loading
	I1006 02:52:46.254515 2399191 ssh_runner.go:195] Run: crio config
	I1006 02:52:46.347078 2399191 cni.go:84] Creating CNI manager for ""
	I1006 02:52:46.347101 2399191 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 02:52:46.347122 2399191 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1006 02:52:46.347149 2399191 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-647181 NodeName:pause-647181 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 02:52:46.347286 2399191 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-647181"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 02:52:46.347357 2399191 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=pause-647181 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:pause-647181 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1006 02:52:46.347422 2399191 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1006 02:52:46.358739 2399191 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 02:52:46.358807 2399191 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 02:52:46.369358 2399191 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (422 bytes)
	I1006 02:52:46.393011 2399191 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 02:52:46.418144 2399191 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
	I1006 02:52:46.441605 2399191 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1006 02:52:46.447107 2399191 certs.go:56] Setting up /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/pause-647181 for IP: 192.168.67.2
	I1006 02:52:46.447139 2399191 certs.go:190] acquiring lock for shared ca certs: {Name:mk4532656c287f04abff160cc5263fddcb69ac4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:52:46.447292 2399191 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.key
	I1006 02:52:46.447358 2399191 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17314-2262959/.minikube/proxy-client-ca.key
	I1006 02:52:46.447445 2399191 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/pause-647181/client.key
	I1006 02:52:46.447529 2399191 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/pause-647181/apiserver.key.c7fa3a9e
	I1006 02:52:46.447608 2399191 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/pause-647181/proxy-client.key
	I1006 02:52:46.447747 2399191 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/2268306.pem (1338 bytes)
	W1006 02:52:46.447788 2399191 certs.go:433] ignoring /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/2268306_empty.pem, impossibly tiny 0 bytes
	I1006 02:52:46.447802 2399191 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 02:52:46.447833 2399191 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem (1082 bytes)
	I1006 02:52:46.447871 2399191 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem (1123 bytes)
	I1006 02:52:46.447898 2399191 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/key.pem (1675 bytes)
	I1006 02:52:46.447985 2399191 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/22683062.pem (1708 bytes)
	I1006 02:52:46.449039 2399191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/pause-647181/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1006 02:52:46.487141 2399191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/pause-647181/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1006 02:52:46.525523 2399191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/pause-647181/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 02:52:46.563878 2399191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/pause-647181/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 02:52:46.595186 2399191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 02:52:46.627515 2399191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 02:52:46.662810 2399191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 02:52:46.694569 2399191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1006 02:52:46.726503 2399191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/22683062.pem --> /usr/share/ca-certificates/22683062.pem (1708 bytes)
	I1006 02:52:46.762397 2399191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 02:52:46.792461 2399191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/2268306.pem --> /usr/share/ca-certificates/2268306.pem (1338 bytes)
	I1006 02:52:46.821636 2399191 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 02:52:46.845622 2399191 ssh_runner.go:195] Run: openssl version
	I1006 02:52:46.853793 2399191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22683062.pem && ln -fs /usr/share/ca-certificates/22683062.pem /etc/ssl/certs/22683062.pem"
	I1006 02:52:46.865967 2399191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22683062.pem
	I1006 02:52:46.871679 2399191 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  6 02:19 /usr/share/ca-certificates/22683062.pem
	I1006 02:52:46.871750 2399191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22683062.pem
	I1006 02:52:46.880793 2399191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22683062.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 02:52:46.892097 2399191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 02:52:46.905126 2399191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 02:52:46.910148 2399191 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  6 02:12 /usr/share/ca-certificates/minikubeCA.pem
	I1006 02:52:46.910211 2399191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 02:52:46.919014 2399191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 02:52:46.931391 2399191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2268306.pem && ln -fs /usr/share/ca-certificates/2268306.pem /etc/ssl/certs/2268306.pem"
	I1006 02:52:46.944593 2399191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2268306.pem
	I1006 02:52:46.950061 2399191 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  6 02:19 /usr/share/ca-certificates/2268306.pem
	I1006 02:52:46.950124 2399191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2268306.pem
	I1006 02:52:46.959432 2399191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2268306.pem /etc/ssl/certs/51391683.0"
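Each ln -fs pair above follows OpenSSL's hashed-directory convention: openssl x509 -hash -noout prints the certificate's subject-name hash (b5213941 for minikubeCA here), and a symlink named <hash>.0 under /etc/ssl/certs lets OpenSSL locate the CA by that hash at verification time. A minimal sketch of the same step, with illustrative paths and no claim to match minikube's exact code:

    // Ask openssl for the subject hash, then link /etc/ssl/certs/<hash>.0 at
    // the certificate so OpenSSL can find it during chain verification.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        _ = os.Remove(link) // replace any stale link, mirroring ln -fs
        if err := os.Symlink(cert, link); err != nil {
            panic(err)
        }
    }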
	I1006 02:52:46.971480 2399191 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1006 02:52:46.976837 2399191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 02:52:46.986324 2399191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 02:52:46.996238 2399191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 02:52:47.006551 2399191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 02:52:47.017221 2399191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 02:52:47.028048 2399191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
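The -checkend 86400 probes above ask whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; openssl exits non-zero when the certificate would expire inside that window, which is the signal to regenerate it. A short sketch of the same check, wrapped in Go for illustration:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // expiresWithin reports true on any non-zero exit: either the certificate
    // expires within the window, or openssl itself could not be run.
    func expiresWithin(cert string, seconds int) bool {
        cmd := exec.Command("openssl", "x509", "-noout", "-in", cert,
            "-checkend", fmt.Sprint(seconds))
        return cmd.Run() != nil
    }

    func main() {
        if expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 86400) {
            fmt.Println("certificate expires within 24h; regenerate")
        } else {
            fmt.Println("certificate valid for at least 24h")
        }
    }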
	I1006 02:52:47.038082 2399191 kubeadm.go:404] StartCluster: {Name:pause-647181 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:pause-647181 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1006 02:52:47.038256 2399191 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 02:52:47.038334 2399191 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 02:52:47.097555 2399191 cri.go:89] found id: "37265c12bb0427b7a9786effa97e18ddaf2281dea4513db1b00771d9d08717fa"
	I1006 02:52:47.097584 2399191 cri.go:89] found id: "2f12f4e73c2c0690b411f1285a9c8db67b9f0cd21fb0c9c14e89c87dacc4493c"
	I1006 02:52:47.097594 2399191 cri.go:89] found id: "af8a4b2970a3b84613bcbbf6aa06501a55e3edeaf7c0178210567318f9ef5c33"
	I1006 02:52:47.097599 2399191 cri.go:89] found id: "25c027ea18e008721a821f76d629a473c19fdb4f16572e2a21450edbeabf3dbb"
	I1006 02:52:47.097604 2399191 cri.go:89] found id: "b5235f6fbd983d9731de29a51cc1cc5b32fa9570cca0eeb69751456dc516d48a"
	I1006 02:52:47.097611 2399191 cri.go:89] found id: "b5847f3e0cfe5982e77de2ff5d2a07618e29d8557f0dabd89b493f9fe0d7f735"
	I1006 02:52:47.097619 2399191 cri.go:89] found id: "93d60293b1fb4af99d9aabce1118e6f119d74b926a27ed829df130a84e190980"
	I1006 02:52:47.097626 2399191 cri.go:89] found id: ""
	I1006 02:52:47.097701 2399191 ssh_runner.go:195] Run: sudo runc list -f json
	I1006 02:52:47.134809 2399191 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"25c027ea18e008721a821f76d629a473c19fdb4f16572e2a21450edbeabf3dbb","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/25c027ea18e008721a821f76d629a473c19fdb4f16572e2a21450edbeabf3dbb/userdata","rootfs":"/var/lib/containers/storage/overlay/e6d334c33e0978bac85ebeb5d93691d4f7ae9dd00c2524ddd9234feb72941ecb/merged","created":"2023-10-06T02:52:44.724946837Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"a53e8561","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"a53e8561\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMes
sagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"25c027ea18e008721a821f76d629a473c19fdb4f16572e2a21450edbeabf3dbb","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-10-06T02:52:44.074306456Z","io.kubernetes.cri-o.Image":"7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.28.2","io.kubernetes.cri-o.ImageRef":"7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-9vvq2\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"eec193c3-c96f-43e8-a0c3-e0964c0c7b51\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-9vvq2_eec193c3-c96f-43e8-a0c3-e0964c0c7b51/kube-proxy/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\",\"attempt\":1}","io.kubernetes.cri-o.MountPo
int":"/var/lib/containers/storage/overlay/e6d334c33e0978bac85ebeb5d93691d4f7ae9dd00c2524ddd9234feb72941ecb/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-9vvq2_kube-system_eec193c3-c96f-43e8-a0c3-e0964c0c7b51_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/1e78cd75160e76cccdaa932000b542af1336bcbfb8f705e538c5b0ae4b55d0e1/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"1e78cd75160e76cccdaa932000b542af1336bcbfb8f705e538c5b0ae4b55d0e1","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-9vvq2_kube-system_eec193c3-c96f-43e8-a0c3-e0964c0c7b51_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagatio
n\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/eec193c3-c96f-43e8-a0c3-e0964c0c7b51/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/eec193c3-c96f-43e8-a0c3-e0964c0c7b51/containers/kube-proxy/5ef4ed5d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/eec193c3-c96f-43e8-a0c3-e0964c0c7b51/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/eec193c3-c96f-43e8-a0c3-e0964c0c7b51/volumes/kubernetes.io~projected/kube-api-access-4pkcp\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-9vvq2","io.kubernetes.pod.namespace":"kube-system","io.kub
ernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"eec193c3-c96f-43e8-a0c3-e0964c0c7b51","kubernetes.io/config.seen":"2023-10-06T02:51:59.595389474Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2f12f4e73c2c0690b411f1285a9c8db67b9f0cd21fb0c9c14e89c87dacc4493c","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/2f12f4e73c2c0690b411f1285a9c8db67b9f0cd21fb0c9c14e89c87dacc4493c/userdata","rootfs":"/var/lib/containers/storage/overlay/780967dcfaced0b3bf5a48d0077dd9fcd866da12e8ad81b7a4fb9e58cf19cbaa/merged","created":"2023-10-06T02:52:44.652602213Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"c6d689cd","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"c6d689cd\",\"io.kuber
netes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"2f12f4e73c2c0690b411f1285a9c8db67b9f0cd21fb0c9c14e89c87dacc4493c","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-10-06T02:52:44.257768236Z","io.kubernetes.cri-o.Image":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri-o.ImageRef":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-647181\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"25e8ae1e6d6e0a0678f46d58d0b6d5df\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-647181_25e8ae1e6d6e0a067
8f46d58d0b6d5df/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/780967dcfaced0b3bf5a48d0077dd9fcd866da12e8ad81b7a4fb9e58cf19cbaa/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-647181_kube-system_25e8ae1e6d6e0a0678f46d58d0b6d5df_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/b6c9c6e9b443f7831a25fad5839c88a5d8a327c19989c2820a8b96cd2f1fd821/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"b6c9c6e9b443f7831a25fad5839c88a5d8a327c19989c2820a8b96cd2f1fd821","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-647181_kube-system_25e8ae1e6d6e0a0678f46d58d0b6d5df_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/25e8ae1e6d6e0a0678f46d58d0b6d5df/etc-hosts\",\"readonly\
":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/25e8ae1e6d6e0a0678f46d58d0b6d5df/containers/etcd/63f34248\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-pause-647181","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"25e8ae1e6d6e0a0678f46d58d0b6d5df","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"25e8ae1e6d6e0a0678f46d58d0b6d5df","kubernetes.io/config.seen":"2023-10-06T02:51:36.802494110Z","kubernetes.io/config.source":"file"},"owner":"
root"},{"ociVersion":"1.0.2-dev","id":"37265c12bb0427b7a9786effa97e18ddaf2281dea4513db1b00771d9d08717fa","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/37265c12bb0427b7a9786effa97e18ddaf2281dea4513db1b00771d9d08717fa/userdata","rootfs":"/var/lib/containers/storage/overlay/e41f844fbe60dbd422f5b00f4c4953bbcdbf2f3190b0fc1a457d7eff7924b551/merged","created":"2023-10-06T02:52:44.67144928Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ef1c7172","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ef1c7172\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminatio
nGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"37265c12bb0427b7a9786effa97e18ddaf2281dea4513db1b00771d9d08717fa","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-10-06T02:52:44.262979732Z","io.kubernetes.cri-o.Image":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20230809-80a64d96","io.kubernetes.cri-o.ImageRef":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-5zz7b\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"006acf6b-70f8-4596-8965-0f13beb4fff6\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-5zz7b_006acf6b-70f8-4596-8965-0f13beb4fff6/kindnet-cni/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/e4
1f844fbe60dbd422f5b00f4c4953bbcdbf2f3190b0fc1a457d7eff7924b551/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-5zz7b_kube-system_006acf6b-70f8-4596-8965-0f13beb4fff6_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/6a34392daaf3058623479fc0d80145d7832dfe21eb0d18f7962afab42660bf22/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"6a34392daaf3058623479fc0d80145d7832dfe21eb0d18f7962afab42660bf22","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-5zz7b_kube-system_006acf6b-70f8-4596-8965-0f13beb4fff6_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_pat
h\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/006acf6b-70f8-4596-8965-0f13beb4fff6/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/006acf6b-70f8-4596-8965-0f13beb4fff6/containers/kindnet-cni/1d0185a5\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/006acf6b-70f8-4596-8965-0f13beb4fff6/volumes/kubernetes.io~projected/kube-api-access-69jhz\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-5zz7b","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"006acf6b-70f8-4596-8965-0f13beb4fff6","kubernetes.io/config.seen":"2023-10-0
6T02:51:59.614608813Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"93d60293b1fb4af99d9aabce1118e6f119d74b926a27ed829df130a84e190980","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/93d60293b1fb4af99d9aabce1118e6f119d74b926a27ed829df130a84e190980/userdata","rootfs":"/var/lib/containers/storage/overlay/fee28ddfc1577d2f0e8a62134a51671cf13cb184f28e02ee425746a945b156e1/merged","created":"2023-10-06T02:52:44.416911538Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"3f60172","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"3f60172\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.conta
iner.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"93d60293b1fb4af99d9aabce1118e6f119d74b926a27ed829df130a84e190980","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-10-06T02:52:43.955575822Z","io.kubernetes.cri-o.Image":"30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.28.2","io.kubernetes.cri-o.ImageRef":"30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-647181\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"e27551db2590924cc574ddd8154eb927\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-647181_e27551db2590924cc574ddd8154eb927/kube-apiserver/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserv
er\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/fee28ddfc1577d2f0e8a62134a51671cf13cb184f28e02ee425746a945b156e1/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-647181_kube-system_e27551db2590924cc574ddd8154eb927_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/2d20e802177f3bfed8565006c2b578e67e134c7e84ab6e5597f8065d0d66bc67/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"2d20e802177f3bfed8565006c2b578e67e134c7e84ab6e5597f8065d0d66bc67","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-647181_kube-system_e27551db2590924cc574ddd8154eb927_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/e27551db2590924cc574ddd8154eb927/containers/kube-apiserver/813b8569\",\"readonly\":false,\
"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/e27551db2590924cc574ddd8154eb927/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-
apiserver-pause-647181","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"e27551db2590924cc574ddd8154eb927","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"e27551db2590924cc574ddd8154eb927","kubernetes.io/config.seen":"2023-10-06T02:51:36.802495628Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"af8a4b2970a3b84613bcbbf6aa06501a55e3edeaf7c0178210567318f9ef5c33","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/af8a4b2970a3b84613bcbbf6aa06501a55e3edeaf7c0178210567318f9ef5c33/userdata","rootfs":"/var/lib/containers/storage/overlay/41c6d73b2822c47ea8e51d142d84ed675f2a49b442b4966aa55411fe9a1867d0/merged","created":"2023-10-06T02:52:44.484558713Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"9aca220a","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name
\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"9aca220a\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"af8a4b2970a3b8
4613bcbbf6aa06501a55e3edeaf7c0178210567318f9ef5c33","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-10-06T02:52:44.126607556Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri-o.ImageRef":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-5dd5756b68-qvjlc\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"5582861e-0771-4be9-85fa-3194c946e4bc\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-5dd5756b68-qvjlc_5582861e-0771-4be9-85fa-3194c946e4bc/coredns/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/41c6d73b2822c47ea8e51d142d84ed6
75f2a49b442b4966aa55411fe9a1867d0/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-5dd5756b68-qvjlc_kube-system_5582861e-0771-4be9-85fa-3194c946e4bc_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/b50c32f244e573157908df70194d395473663708f3c47b6310cbdbf9bee5ff91/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"b50c32f244e573157908df70194d395473663708f3c47b6310cbdbf9bee5ff91","io.kubernetes.cri-o.SandboxName":"k8s_coredns-5dd5756b68-qvjlc_kube-system_5582861e-0771-4be9-85fa-3194c946e4bc_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/5582861e-0771-4be9-85fa-3194c946e4bc/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/5582
861e-0771-4be9-85fa-3194c946e4bc/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/5582861e-0771-4be9-85fa-3194c946e4bc/containers/coredns/a9dfe532\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/5582861e-0771-4be9-85fa-3194c946e4bc/volumes/kubernetes.io~projected/kube-api-access-gdwls\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-5dd5756b68-qvjlc","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"5582861e-0771-4be9-85fa-3194c946e4bc","kubernetes.io/config.seen":"2023-10-06T02:52:31.300982179Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b5235f6fbd983d9731de29a51cc1cc5b32fa9570cca0eeb69751456dc516d48a","pid":0,"status"
:"stopped","bundle":"/run/containers/storage/overlay-containers/b5235f6fbd983d9731de29a51cc1cc5b32fa9570cca0eeb69751456dc516d48a/userdata","rootfs":"/var/lib/containers/storage/overlay/dc90b54aa75ebfaeae270e5ff9e0d50ff5b2611a63e26effc3979bfb3161a722/merged","created":"2023-10-06T02:52:44.628911071Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"1dae5448","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"1dae5448\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"b5235f6fbd983d9731de29a51cc1cc5b32fa9570cca0eeb69
751456dc516d48a","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-10-06T02:52:44.034674462Z","io.kubernetes.cri-o.Image":"89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.28.2","io.kubernetes.cri-o.ImageRef":"89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-647181\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"a182055df2c2d8538eb3c318ae4de8d0\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-647181_a182055df2c2d8538eb3c318ae4de8d0/kube-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/dc90b54aa75ebfaeae270e5ff9e0d50ff5
b2611a63e26effc3979bfb3161a722/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-647181_kube-system_a182055df2c2d8538eb3c318ae4de8d0_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/92e0672bb5ea78370ced72289848c886508ab0f7eb43fd6c7a1887a3e6800a61/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"92e0672bb5ea78370ced72289848c886508ab0f7eb43fd6c7a1887a3e6800a61","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-647181_kube-system_a182055df2c2d8538eb3c318ae4de8d0_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/a182055df2c2d8538eb3c318ae4de8d0/containe
rs/kube-controller-manager/6dac669b\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/a182055df2c2d8538eb3c318ae4de8d0/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",
\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-647181","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"a182055df2c2d8538eb3c318ae4de8d0","kubernetes.io/config.hash":"a182055df2c2d8538eb3c318ae4de8d0","kubernetes.io/config.seen":"2023-10-06T02:51:36.802486930Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b5847f3e0cfe5982e77de2ff5d2a07618e29d8557f0dabd89b493f9fe0d7f735","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/b5847f3e0cfe5982e77de2ff5d2a07618e29d8557f0dabd89b493f9fe0d7f735/userdata","rootfs":"/var/lib/containers/storage/overlay/c4adb657c45234fbb26814cab209c0684d1ea5c05d49d3eee84e2e53f
05027de/merged","created":"2023-10-06T02:52:44.495471399Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"66541c94","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"66541c94\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"b5847f3e0cfe5982e77de2ff5d2a07618e29d8557f0dabd89b493f9fe0d7f735","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-10-06T02:52:43.99268907Z","io.kubernetes.cri-o.Image":"64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7","io.kubernetes.cri-o.ImageNam
e":"registry.k8s.io/kube-scheduler:v1.28.2","io.kubernetes.cri-o.ImageRef":"64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-647181\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"5aa93b08a5ba986e2969cb87bd1470c7\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-647181_5aa93b08a5ba986e2969cb87bd1470c7/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c4adb657c45234fbb26814cab209c0684d1ea5c05d49d3eee84e2e53f05027de/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-647181_kube-system_5aa93b08a5ba986e2969cb87bd1470c7_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/32a3093f8b26d189452e3ee71f6842974dfa151cb0218557224a602fbb2bbfc4/us
erdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"32a3093f8b26d189452e3ee71f6842974dfa151cb0218557224a602fbb2bbfc4","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-647181_kube-system_5aa93b08a5ba986e2969cb87bd1470c7_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/5aa93b08a5ba986e2969cb87bd1470c7/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/5aa93b08a5ba986e2969cb87bd1470c7/containers/kube-scheduler/76ec99b5\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-sched
uler-pause-647181","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"5aa93b08a5ba986e2969cb87bd1470c7","kubernetes.io/config.hash":"5aa93b08a5ba986e2969cb87bd1470c7","kubernetes.io/config.seen":"2023-10-06T02:51:36.802492682Z","kubernetes.io/config.source":"file"},"owner":"root"}]
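The blob above is the raw sudo runc list -f json output, and the cri.go lines that follow are its parsed form; only each container's id and status matter here. A minimal decode sketch, with the struct fields taken from the JSON keys above:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // container carries the two fields of interest from `runc list -f json`.
    type container struct {
        ID     string `json:"id"`
        Status string `json:"status"`
    }

    func main() {
        out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
        if err != nil {
            panic(err)
        }
        var cs []container
        if err := json.Unmarshal(out, &cs); err != nil {
            panic(err)
        }
        for _, c := range cs {
            fmt.Printf("container: {ID:%s Status:%s}\n", c.ID, c.Status)
        }
    }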
	I1006 02:52:47.135598 2399191 cri.go:126] list returned 7 containers
	I1006 02:52:47.135622 2399191 cri.go:129] container: {ID:25c027ea18e008721a821f76d629a473c19fdb4f16572e2a21450edbeabf3dbb Status:stopped}
	I1006 02:52:47.135648 2399191 cri.go:135] skipping {25c027ea18e008721a821f76d629a473c19fdb4f16572e2a21450edbeabf3dbb stopped}: state = "stopped", want "paused"
	I1006 02:52:47.135671 2399191 cri.go:129] container: {ID:2f12f4e73c2c0690b411f1285a9c8db67b9f0cd21fb0c9c14e89c87dacc4493c Status:stopped}
	I1006 02:52:47.135684 2399191 cri.go:135] skipping {2f12f4e73c2c0690b411f1285a9c8db67b9f0cd21fb0c9c14e89c87dacc4493c stopped}: state = "stopped", want "paused"
	I1006 02:52:47.135694 2399191 cri.go:129] container: {ID:37265c12bb0427b7a9786effa97e18ddaf2281dea4513db1b00771d9d08717fa Status:stopped}
	I1006 02:52:47.135706 2399191 cri.go:135] skipping {37265c12bb0427b7a9786effa97e18ddaf2281dea4513db1b00771d9d08717fa stopped}: state = "stopped", want "paused"
	I1006 02:52:47.135716 2399191 cri.go:129] container: {ID:93d60293b1fb4af99d9aabce1118e6f119d74b926a27ed829df130a84e190980 Status:stopped}
	I1006 02:52:47.135724 2399191 cri.go:135] skipping {93d60293b1fb4af99d9aabce1118e6f119d74b926a27ed829df130a84e190980 stopped}: state = "stopped", want "paused"
	I1006 02:52:47.135730 2399191 cri.go:129] container: {ID:af8a4b2970a3b84613bcbbf6aa06501a55e3edeaf7c0178210567318f9ef5c33 Status:stopped}
	I1006 02:52:47.135740 2399191 cri.go:135] skipping {af8a4b2970a3b84613bcbbf6aa06501a55e3edeaf7c0178210567318f9ef5c33 stopped}: state = "stopped", want "paused"
	I1006 02:52:47.135752 2399191 cri.go:129] container: {ID:b5235f6fbd983d9731de29a51cc1cc5b32fa9570cca0eeb69751456dc516d48a Status:stopped}
	I1006 02:52:47.135761 2399191 cri.go:135] skipping {b5235f6fbd983d9731de29a51cc1cc5b32fa9570cca0eeb69751456dc516d48a stopped}: state = "stopped", want "paused"
	I1006 02:52:47.135776 2399191 cri.go:129] container: {ID:b5847f3e0cfe5982e77de2ff5d2a07618e29d8557f0dabd89b493f9fe0d7f735 Status:stopped}
	I1006 02:52:47.135787 2399191 cri.go:135] skipping {b5847f3e0cfe5982e77de2ff5d2a07618e29d8557f0dabd89b493f9fe0d7f735 stopped}: state = "stopped", want "paused"
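All seven containers are skipped because this pass keeps only containers whose state matches the requested "paused"; everything here is stopped, so there is nothing to unpause before the restart. A sketch of that filter, using illustrative types rather than minikube's own:

    package main

    import "fmt"

    type ctr struct{ ID, Status string }

    // filterByState keeps containers in the wanted state and logs the rest,
    // mirroring the skipping lines above.
    func filterByState(cs []ctr, want string) []ctr {
        var keep []ctr
        for _, c := range cs {
            if c.Status != want {
                fmt.Printf("skipping {%s %s}: state = %q, want %q\n", c.ID, c.Status, c.Status, want)
                continue
            }
            keep = append(keep, c)
        }
        return keep
    }

    func main() {
        matched := filterByState([]ctr{{ID: "25c027ea…", Status: "stopped"}}, "paused")
        fmt.Println("matched:", len(matched))
    }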
	I1006 02:52:47.135874 2399191 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 02:52:47.152208 2399191 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1006 02:52:47.152229 2399191 kubeadm.go:636] restartCluster start
	I1006 02:52:47.152296 2399191 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1006 02:52:47.171776 2399191 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1006 02:52:47.172485 2399191 kubeconfig.go:92] found "pause-647181" server: "https://192.168.67.2:8443"
	I1006 02:52:47.173345 2399191 kapi.go:59] client config for pause-647181: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/pause-647181/client.crt", KeyFile:"/home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/pause-647181/client.key", CAFile:"/home/jenkins/minikube-integration/17314-2262959/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16a24a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
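The kapi.go dump above is a client-go rest.Config: it points at the apiserver endpoint and authenticates with the profile's client certificate and key, trusting the minikube CA. A minimal sketch of building such a config from a kubeconfig file; the path is a stand-in, and this is ordinary client-go usage rather than minikube's own constructor:

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Path is illustrative; minikube resolves the kubeconfig for the profile.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        fmt.Println("host:", cfg.Host)                             // e.g. https://192.168.67.2:8443
        fmt.Println("client cert:", cfg.TLSClientConfig.CertFile)  // the profile's client.crt
    }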
	I1006 02:52:47.174393 2399191 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1006 02:52:47.186188 2399191 api_server.go:166] Checking apiserver status ...
	I1006 02:52:47.186286 2399191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 02:52:47.199201 2399191 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 02:52:47.199228 2399191 api_server.go:166] Checking apiserver status ...
	I1006 02:52:47.199301 2399191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 02:52:47.211294 2399191 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
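pgrep -xnf matches the pattern against the full command line (-f) of the newest matching process (-n), requiring an exact match (-x); exit status 1 simply means no kube-apiserver process exists yet. The attempts that follow repeat the probe on roughly a 500ms cadence. A sketch of such a poll loop, with an assumed 30-second budget:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(30 * time.Second)
        for time.Now().Before(deadline) {
            out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                fmt.Printf("apiserver pid: %s", out)
                return
            }
            time.Sleep(500 * time.Millisecond) // mirrors the spacing in the log
        }
        fmt.Println("apiserver never appeared")
    }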
	I1006 02:52:47.712535 2399191 api_server.go:166] Checking apiserver status ...
	I1006 02:52:47.712603 2399191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 02:52:47.735570 2399191 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 02:52:48.212030 2399191 api_server.go:166] Checking apiserver status ...
	I1006 02:52:48.212114 2399191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 02:52:48.224288 2399191 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 02:52:48.711944 2399191 api_server.go:166] Checking apiserver status ...
	I1006 02:52:48.712011 2399191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 02:52:48.724470 2399191 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 02:52:49.212097 2399191 api_server.go:166] Checking apiserver status ...
	I1006 02:52:49.212189 2399191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 02:52:49.225657 2399191 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 02:52:49.714262 2399191 api_server.go:166] Checking apiserver status ...
	I1006 02:52:49.714364 2399191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 02:52:49.726519 2399191 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 02:52:50.212066 2399191 api_server.go:166] Checking apiserver status ...
	I1006 02:52:50.212149 2399191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 02:52:50.224853 2399191 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 02:52:50.711587 2399191 api_server.go:166] Checking apiserver status ...
	I1006 02:52:50.711670 2399191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 02:52:50.724390 2399191 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 02:52:51.211753 2399191 api_server.go:166] Checking apiserver status ...
	I1006 02:52:51.211848 2399191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 02:52:51.224856 2399191 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 02:52:51.712180 2399191 api_server.go:166] Checking apiserver status ...
	I1006 02:52:51.712261 2399191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 02:52:51.725724 2399191 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 02:52:52.212135 2399191 api_server.go:166] Checking apiserver status ...
	I1006 02:52:52.212223 2399191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 02:52:52.225983 2399191 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 02:52:52.711436 2399191 api_server.go:166] Checking apiserver status ...
	I1006 02:52:52.711529 2399191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 02:52:52.724795 2399191 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 02:52:53.212312 2399191 api_server.go:166] Checking apiserver status ...
	I1006 02:52:53.212391 2399191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 02:52:53.226614 2399191 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 02:52:53.712172 2399191 api_server.go:166] Checking apiserver status ...
	I1006 02:52:53.712267 2399191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 02:52:53.725372 2399191 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 02:52:54.211417 2399191 api_server.go:166] Checking apiserver status ...
	I1006 02:52:54.211509 2399191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 02:52:54.224268 2399191 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 02:52:54.711413 2399191 api_server.go:166] Checking apiserver status ...
	I1006 02:52:54.711508 2399191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 02:52:54.724450 2399191 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 02:52:55.211973 2399191 api_server.go:166] Checking apiserver status ...
	I1006 02:52:55.212055 2399191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 02:52:55.225370 2399191 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 02:52:55.711952 2399191 api_server.go:166] Checking apiserver status ...
	I1006 02:52:55.712037 2399191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1006 02:52:55.724480 2399191 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1006 02:52:56.211650 2399191 api_server.go:166] Checking apiserver status ...
	I1006 02:52:56.211733 2399191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 02:52:56.247697 2399191 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2978/cgroup
	I1006 02:52:56.269779 2399191 api_server.go:182] apiserver freezer: "3:freezer:/docker/d963a8039ef745e725d58e86c36b0b62a1c7fdb86c367a3dda708055b98aab82/crio/crio-4e835c4e54a761cd84f42ccbf369d2132f54b37768f4620253facaa380cca10e"
	I1006 02:52:56.269843 2399191 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d963a8039ef745e725d58e86c36b0b62a1c7fdb86c367a3dda708055b98aab82/crio/crio-4e835c4e54a761cd84f42ccbf369d2132f54b37768f4620253facaa380cca10e/freezer.state
	I1006 02:52:56.295200 2399191 api_server.go:204] freezer state: "THAWED"
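With a pid in hand, api_server.go resolves the process's freezer cgroup from /proc/<pid>/cgroup and reads its freezer.state; THAWED means the crio container is running normally, while FROZEN would indicate it was left paused. A sketch of the same lookup, assuming the cgroup v1 layout this host uses:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func freezerState(pid int) (string, error) {
        data, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
        if err != nil {
            return "", err
        }
        for _, line := range strings.Split(string(data), "\n") {
            parts := strings.SplitN(line, ":", 3) // e.g. "3:freezer:/docker/..."
            if len(parts) == 3 && parts[1] == "freezer" {
                state, err := os.ReadFile("/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
                return strings.TrimSpace(string(state)), err
            }
        }
        return "", fmt.Errorf("no freezer cgroup for pid %d", pid)
    }

    func main() {
        s, err := freezerState(2978)
        fmt.Println(s, err) // expect "THAWED" for a running container
    }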
	I1006 02:52:56.295233 2399191 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1006 02:53:01.296253 2399191 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1006 02:53:01.296302 2399191 retry.go:31] will retry after 204.542179ms: state is "Stopped"
	I1006 02:53:01.501730 2399191 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1006 02:53:06.502625 2399191 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1006 02:53:06.502672 2399191 retry.go:31] will retry after 318.929281ms: state is "Stopped"
	I1006 02:53:06.822305 2399191 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1006 02:53:11.823309 2399191 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1006 02:53:11.823361 2399191 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
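Each healthz probe above is an HTTPS GET against the apiserver with a short client-side timeout; three consecutive context-deadline failures are what drive the needs-reconfigure decision. A self-contained sketch of one probe, using a 5-second timeout to match the spacing above (certificate verification is skipped only to keep the sketch standalone; minikube verifies against its own CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Self-contained stand-in; the real check trusts the minikube CA.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.67.2:8443/healthz")
        if err != nil {
            fmt.Println("stopped:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("healthz:", resp.Status)
    }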
	I1006 02:53:11.823380 2399191 kubeadm.go:1128] stopping kube-system containers ...
	I1006 02:53:11.823389 2399191 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1006 02:53:11.823467 2399191 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 02:53:11.937393 2399191 cri.go:89] found id: "0c7033aa5193617c5b60e134c5208cc49f7bbfe902e6cb898289396e7e099d97"
	I1006 02:53:11.937411 2399191 cri.go:89] found id: "1f627028913f83193d21e2ba9b23de5addd0af3f499f32d0a5491aa797f9b938"
	I1006 02:53:11.937417 2399191 cri.go:89] found id: "5481ab047c1f37fc36bbac6bffa2691ed4776094a359b4a13151405f76d97cb7"
	I1006 02:53:11.937422 2399191 cri.go:89] found id: "614ae4c2dae8d574f26260b3a7eafa44d81179dfec617cf466b75daffacc2b9a"
	I1006 02:53:11.937426 2399191 cri.go:89] found id: "1f7d2ed33dd3634ef3dd4898e89ea5b1a30b16dde68873ab0ec0afdcba6d3efd"
	I1006 02:53:11.937431 2399191 cri.go:89] found id: "0cf5928c77815df3b44ae491c77e3730749b2dc1ec4b9c6754f76e504722f1c4"
	I1006 02:53:11.937435 2399191 cri.go:89] found id: "4e835c4e54a761cd84f42ccbf369d2132f54b37768f4620253facaa380cca10e"
	I1006 02:53:11.937439 2399191 cri.go:89] found id: "37265c12bb0427b7a9786effa97e18ddaf2281dea4513db1b00771d9d08717fa"
	I1006 02:53:11.937443 2399191 cri.go:89] found id: "2f12f4e73c2c0690b411f1285a9c8db67b9f0cd21fb0c9c14e89c87dacc4493c"
	I1006 02:53:11.937450 2399191 cri.go:89] found id: "af8a4b2970a3b84613bcbbf6aa06501a55e3edeaf7c0178210567318f9ef5c33"
	I1006 02:53:11.937455 2399191 cri.go:89] found id: "25c027ea18e008721a821f76d629a473c19fdb4f16572e2a21450edbeabf3dbb"
	I1006 02:53:11.937459 2399191 cri.go:89] found id: "b5235f6fbd983d9731de29a51cc1cc5b32fa9570cca0eeb69751456dc516d48a"
	I1006 02:53:11.937463 2399191 cri.go:89] found id: "b5847f3e0cfe5982e77de2ff5d2a07618e29d8557f0dabd89b493f9fe0d7f735"
	I1006 02:53:11.937469 2399191 cri.go:89] found id: "93d60293b1fb4af99d9aabce1118e6f119d74b926a27ed829df130a84e190980"
	I1006 02:53:11.937473 2399191 cri.go:89] found id: ""
	I1006 02:53:11.937478 2399191 cri.go:234] Stopping containers: [0c7033aa5193617c5b60e134c5208cc49f7bbfe902e6cb898289396e7e099d97 1f627028913f83193d21e2ba9b23de5addd0af3f499f32d0a5491aa797f9b938 5481ab047c1f37fc36bbac6bffa2691ed4776094a359b4a13151405f76d97cb7 614ae4c2dae8d574f26260b3a7eafa44d81179dfec617cf466b75daffacc2b9a 1f7d2ed33dd3634ef3dd4898e89ea5b1a30b16dde68873ab0ec0afdcba6d3efd 0cf5928c77815df3b44ae491c77e3730749b2dc1ec4b9c6754f76e504722f1c4 4e835c4e54a761cd84f42ccbf369d2132f54b37768f4620253facaa380cca10e 37265c12bb0427b7a9786effa97e18ddaf2281dea4513db1b00771d9d08717fa 2f12f4e73c2c0690b411f1285a9c8db67b9f0cd21fb0c9c14e89c87dacc4493c af8a4b2970a3b84613bcbbf6aa06501a55e3edeaf7c0178210567318f9ef5c33 25c027ea18e008721a821f76d629a473c19fdb4f16572e2a21450edbeabf3dbb b5235f6fbd983d9731de29a51cc1cc5b32fa9570cca0eeb69751456dc516d48a b5847f3e0cfe5982e77de2ff5d2a07618e29d8557f0dabd89b493f9fe0d7f735 93d60293b1fb4af99d9aabce1118e6f119d74b926a27ed829df130a84e190980]
	I1006 02:53:11.937537 2399191 ssh_runner.go:195] Run: which crictl
	I1006 02:53:11.948517 2399191 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 0c7033aa5193617c5b60e134c5208cc49f7bbfe902e6cb898289396e7e099d97 1f627028913f83193d21e2ba9b23de5addd0af3f499f32d0a5491aa797f9b938 5481ab047c1f37fc36bbac6bffa2691ed4776094a359b4a13151405f76d97cb7 614ae4c2dae8d574f26260b3a7eafa44d81179dfec617cf466b75daffacc2b9a 1f7d2ed33dd3634ef3dd4898e89ea5b1a30b16dde68873ab0ec0afdcba6d3efd 0cf5928c77815df3b44ae491c77e3730749b2dc1ec4b9c6754f76e504722f1c4 4e835c4e54a761cd84f42ccbf369d2132f54b37768f4620253facaa380cca10e 37265c12bb0427b7a9786effa97e18ddaf2281dea4513db1b00771d9d08717fa 2f12f4e73c2c0690b411f1285a9c8db67b9f0cd21fb0c9c14e89c87dacc4493c af8a4b2970a3b84613bcbbf6aa06501a55e3edeaf7c0178210567318f9ef5c33 25c027ea18e008721a821f76d629a473c19fdb4f16572e2a21450edbeabf3dbb b5235f6fbd983d9731de29a51cc1cc5b32fa9570cca0eeb69751456dc516d48a b5847f3e0cfe5982e77de2ff5d2a07618e29d8557f0dabd89b493f9fe0d7f735 93d60293b1fb4af99d9aabce1118e6f119d74b926a27ed829df130a84e190980
	I1006 02:53:32.340567 2399191 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 0c7033aa5193617c5b60e134c5208cc49f7bbfe902e6cb898289396e7e099d97 1f627028913f83193d21e2ba9b23de5addd0af3f499f32d0a5491aa797f9b938 5481ab047c1f37fc36bbac6bffa2691ed4776094a359b4a13151405f76d97cb7 614ae4c2dae8d574f26260b3a7eafa44d81179dfec617cf466b75daffacc2b9a 1f7d2ed33dd3634ef3dd4898e89ea5b1a30b16dde68873ab0ec0afdcba6d3efd 0cf5928c77815df3b44ae491c77e3730749b2dc1ec4b9c6754f76e504722f1c4 4e835c4e54a761cd84f42ccbf369d2132f54b37768f4620253facaa380cca10e 37265c12bb0427b7a9786effa97e18ddaf2281dea4513db1b00771d9d08717fa 2f12f4e73c2c0690b411f1285a9c8db67b9f0cd21fb0c9c14e89c87dacc4493c af8a4b2970a3b84613bcbbf6aa06501a55e3edeaf7c0178210567318f9ef5c33 25c027ea18e008721a821f76d629a473c19fdb4f16572e2a21450edbeabf3dbb b5235f6fbd983d9731de29a51cc1cc5b32fa9570cca0eeb69751456dc516d48a b5847f3e0cfe5982e77de2ff5d2a07618e29d8557f0dabd89b493f9fe0d7f735 93d60293b1fb4af99d9aabce1118e6f119d74b926a27ed829df130a84e190980: (20.392012216s)
	W1006 02:53:32.340632 2399191 kubeadm.go:689] Failed to stop kube-system containers: port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 0c7033aa5193617c5b60e134c5208cc49f7bbfe902e6cb898289396e7e099d97 1f627028913f83193d21e2ba9b23de5addd0af3f499f32d0a5491aa797f9b938 5481ab047c1f37fc36bbac6bffa2691ed4776094a359b4a13151405f76d97cb7 614ae4c2dae8d574f26260b3a7eafa44d81179dfec617cf466b75daffacc2b9a 1f7d2ed33dd3634ef3dd4898e89ea5b1a30b16dde68873ab0ec0afdcba6d3efd 0cf5928c77815df3b44ae491c77e3730749b2dc1ec4b9c6754f76e504722f1c4 4e835c4e54a761cd84f42ccbf369d2132f54b37768f4620253facaa380cca10e 37265c12bb0427b7a9786effa97e18ddaf2281dea4513db1b00771d9d08717fa 2f12f4e73c2c0690b411f1285a9c8db67b9f0cd21fb0c9c14e89c87dacc4493c af8a4b2970a3b84613bcbbf6aa06501a55e3edeaf7c0178210567318f9ef5c33 25c027ea18e008721a821f76d629a473c19fdb4f16572e2a21450edbeabf3dbb b5235f6fbd983d9731de29a51cc1cc5b32fa9570cca0eeb69751456dc516d48a b5847f3e0cfe5982e77de2ff5d2a07618e29d8557f0dabd89b493f9fe0d7f735 93d60293b1fb4af99d9aabce1118e6f119d74b926a27ed829df130a84e190980: Process exited with status 1
	stdout:
	0c7033aa5193617c5b60e134c5208cc49f7bbfe902e6cb898289396e7e099d97
	1f627028913f83193d21e2ba9b23de5addd0af3f499f32d0a5491aa797f9b938
	5481ab047c1f37fc36bbac6bffa2691ed4776094a359b4a13151405f76d97cb7
	614ae4c2dae8d574f26260b3a7eafa44d81179dfec617cf466b75daffacc2b9a
	1f7d2ed33dd3634ef3dd4898e89ea5b1a30b16dde68873ab0ec0afdcba6d3efd
	0cf5928c77815df3b44ae491c77e3730749b2dc1ec4b9c6754f76e504722f1c4
	4e835c4e54a761cd84f42ccbf369d2132f54b37768f4620253facaa380cca10e
	
	stderr:
	E1006 02:53:32.336955    3292 remote_runtime.go:505] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37265c12bb0427b7a9786effa97e18ddaf2281dea4513db1b00771d9d08717fa\": container with ID starting with 37265c12bb0427b7a9786effa97e18ddaf2281dea4513db1b00771d9d08717fa not found: ID does not exist" containerID="37265c12bb0427b7a9786effa97e18ddaf2281dea4513db1b00771d9d08717fa"
	time="2023-10-06T02:53:32Z" level=fatal msg="stopping the container \"37265c12bb0427b7a9786effa97e18ddaf2281dea4513db1b00771d9d08717fa\": rpc error: code = NotFound desc = could not find container \"37265c12bb0427b7a9786effa97e18ddaf2281dea4513db1b00771d9d08717fa\": container with ID starting with 37265c12bb0427b7a9786effa97e18ddaf2281dea4513db1b00771d9d08717fa not found: ID does not exist"
	I1006 02:53:32.340693 2399191 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1006 02:53:32.440383 2399191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 02:53:32.452160 2399191 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Oct  6 02:51 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Oct  6 02:51 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Oct  6 02:51 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Oct  6 02:51 /etc/kubernetes/scheduler.conf
	
	I1006 02:53:32.452247 2399191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 02:53:32.469938 2399191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 02:53:32.491531 2399191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 02:53:32.511150 2399191 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1006 02:53:32.511220 2399191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 02:53:32.527505 2399191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 02:53:32.541331 2399191 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1006 02:53:32.541400 2399191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
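
The grep/rm pairs above are a staleness check: any kubeconfig that no longer mentions the expected control-plane endpoint is deleted so the upcoming "kubeadm init phase kubeconfig all" can regenerate it (admin.conf and kubelet.conf passed the check; controller-manager.conf and scheduler.conf did not). A minimal sketch of that check in Go, using the paths and endpoint from the log:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // pruneStaleKubeconfig removes path if it does not reference endpoint,
    // mirroring the "may not be in ... - will remove" lines in the log.
    func pruneStaleKubeconfig(path, endpoint string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	if !strings.Contains(string(data), endpoint) {
    		return os.Remove(path)
    	}
    	return nil
    }

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	for _, f := range []string{
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		if err := pruneStaleKubeconfig(f, endpoint); err != nil {
    			fmt.Fprintln(os.Stderr, err)
    		}
    	}
    }
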
	I1006 02:53:32.553960 2399191 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 02:53:32.569236 2399191 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1006 02:53:32.569258 2399191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 02:53:32.663913 2399191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 02:53:34.416581 2399191 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.752579612s)
	I1006 02:53:34.416616 2399191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1006 02:53:34.668243 2399191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 02:53:34.784441 2399191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
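
Those five Run lines replay the relevant kubeadm init phases one by one (certs, kubeconfig, kubelet-start, control-plane, etcd) against the staged /var/tmp/minikube/kubeadm.yaml instead of doing a full init, which is what makes this a reconfigure rather than a rebuild. A compact sketch of that loop; the command strings are copied from the log, the loop structure is an assumption:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	phases := []string{
    		"certs all", "kubeconfig all", "kubelet-start",
    		"control-plane all", "etcd local",
    	}
    	for _, p := range phases {
    		cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
    		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
    			fmt.Fprintf(os.Stderr, "phase %q failed: %v\n%s", p, err, out)
    			return
    		}
    	}
    }
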
	I1006 02:53:34.967497 2399191 api_server.go:52] waiting for apiserver process to appear ...
	I1006 02:53:34.967565 2399191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 02:53:35.025265 2399191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 02:53:35.572795 2399191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 02:53:36.075690 2399191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 02:53:36.108214 2399191 api_server.go:72] duration metric: took 1.14071579s to wait for apiserver process to appear ...
	I1006 02:53:36.108245 2399191 api_server.go:88] waiting for apiserver healthz status ...
	I1006 02:53:36.108278 2399191 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1006 02:53:36.108547 2399191 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I1006 02:53:36.108576 2399191 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1006 02:53:36.108738 2399191 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I1006 02:53:36.609248 2399191 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1006 02:53:41.609651 2399191 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1006 02:53:41.609692 2399191 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1006 02:53:41.901879 2399191 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1006 02:53:41.901909 2399191 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1006 02:53:41.901926 2399191 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1006 02:53:41.912718 2399191 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1006 02:53:41.912753 2399191 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1006 02:53:42.108884 2399191 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1006 02:53:42.132565 2399191 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1006 02:53:42.132597 2399191 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1006 02:53:42.609872 2399191 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1006 02:53:42.630319 2399191 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1006 02:53:42.630345 2399191 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1006 02:53:43.110627 2399191 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1006 02:53:43.142092 2399191 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1006 02:53:43.142116 2399191 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1006 02:53:43.609334 2399191 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1006 02:53:43.629903 2399191 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1006 02:53:43.629986 2399191 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1006 02:53:44.109232 2399191 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1006 02:53:44.131027 2399191 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
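
The eight seconds of 500s above are the apiserver finishing startup: /healthz aggregates one line per component check, failing checks print as "[-]... failed: reason withheld", and the poll simply repeats until every check flips to "[+]" and the body collapses to a bare "ok". A bare-bones poller looks roughly like the sketch below; note that the real minikube client authenticates with the cluster's client certificates, so InsecureSkipVerify here is a stand-in:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for {
    		resp, err := client.Get("https://192.168.67.2:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("healthz: %s\n", body) // prints "ok"
    				return
    			}
    			// On 500 the body enumerates failing checks, e.g. "[-]etcd failed".
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }
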
	I1006 02:53:44.152108 2399191 api_server.go:141] control plane version: v1.28.2
	I1006 02:53:44.152146 2399191 api_server.go:131] duration metric: took 8.043892733s to wait for apiserver health ...
	I1006 02:53:44.152157 2399191 cni.go:84] Creating CNI manager for ""
	I1006 02:53:44.152170 2399191 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 02:53:44.154711 2399191 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1006 02:53:44.156991 2399191 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1006 02:53:44.174968 2399191 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1006 02:53:44.174988 2399191 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1006 02:53:44.219173 2399191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1006 02:53:45.260624 2399191 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.041361101s)
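
Applying CNI is two moves in the log: the generated manifest (2438 bytes of kindnet YAML here) is copied onto the node, then applied with the version-pinned kubectl against the in-node kubeconfig. Roughly, with a stub manifest standing in for the real one:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Stub payload; the real bytes are whatever minikube generated.
    	manifest := []byte("# kindnet DaemonSet manifest goes here\n")
    	if err := os.WriteFile("/var/tmp/minikube/cni.yaml", manifest, 0o644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	out, err := exec.Command("sudo",
    		"/var/lib/minikube/binaries/v1.28.2/kubectl", "apply",
    		"--kubeconfig=/var/lib/minikube/kubeconfig",
    		"-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
    	if err != nil {
    		fmt.Fprintf(os.Stderr, "apply failed: %v\n%s", err, out)
    	}
    }
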
	I1006 02:53:45.260679 2399191 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 02:53:45.271930 2399191 system_pods.go:59] 7 kube-system pods found
	I1006 02:53:45.271980 2399191 system_pods.go:61] "coredns-5dd5756b68-qvjlc" [5582861e-0771-4be9-85fa-3194c946e4bc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 02:53:45.271990 2399191 system_pods.go:61] "etcd-pause-647181" [2278dbab-5509-4155-ab45-4bbc4a195cea] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1006 02:53:45.271999 2399191 system_pods.go:61] "kindnet-5zz7b" [006acf6b-70f8-4596-8965-0f13beb4fff6] Running
	I1006 02:53:45.272006 2399191 system_pods.go:61] "kube-apiserver-pause-647181" [d0788f9d-f101-42ba-9145-4435f137fa8b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1006 02:53:45.272036 2399191 system_pods.go:61] "kube-controller-manager-pause-647181" [255471e6-b5e4-4d3e-b091-02979c39ab2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1006 02:53:45.272042 2399191 system_pods.go:61] "kube-proxy-9vvq2" [eec193c3-c96f-43e8-a0c3-e0964c0c7b51] Running
	I1006 02:53:45.272051 2399191 system_pods.go:61] "kube-scheduler-pause-647181" [cb621118-2a4c-4ed4-88ad-f64283986b73] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 02:53:45.272058 2399191 system_pods.go:74] duration metric: took 11.353293ms to wait for pod list to return data ...
	I1006 02:53:45.272067 2399191 node_conditions.go:102] verifying NodePressure condition ...
	I1006 02:53:45.275742 2399191 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 02:53:45.275780 2399191 node_conditions.go:123] node cpu capacity is 2
	I1006 02:53:45.275792 2399191 node_conditions.go:105] duration metric: took 3.720589ms to run NodePressure ...
	I1006 02:53:45.275815 2399191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 02:53:45.497975 2399191 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1006 02:53:45.506362 2399191 kubeadm.go:787] kubelet initialised
	I1006 02:53:45.506385 2399191 kubeadm.go:788] duration metric: took 8.350181ms waiting for restarted kubelet to initialise ...
	I1006 02:53:45.506395 2399191 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1006 02:53:45.512441 2399191 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-qvjlc" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:47.532776 2399191 pod_ready.go:102] pod "coredns-5dd5756b68-qvjlc" in "kube-system" namespace has status "Ready":"False"
	I1006 02:53:49.533221 2399191 pod_ready.go:92] pod "coredns-5dd5756b68-qvjlc" in "kube-system" namespace has status "Ready":"True"
	I1006 02:53:49.533240 2399191 pod_ready.go:81] duration metric: took 4.020762669s waiting for pod "coredns-5dd5756b68-qvjlc" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:49.533251 2399191 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:49.541162 2399191 pod_ready.go:92] pod "etcd-pause-647181" in "kube-system" namespace has status "Ready":"True"
	I1006 02:53:49.541231 2399191 pod_ready.go:81] duration metric: took 7.96315ms waiting for pod "etcd-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:49.541260 2399191 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:51.567383 2399191 pod_ready.go:102] pod "kube-apiserver-pause-647181" in "kube-system" namespace has status "Ready":"False"
	I1006 02:53:52.566289 2399191 pod_ready.go:92] pod "kube-apiserver-pause-647181" in "kube-system" namespace has status "Ready":"True"
	I1006 02:53:52.566311 2399191 pod_ready.go:81] duration metric: took 3.025028247s waiting for pod "kube-apiserver-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:52.566323 2399191 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:53.587501 2399191 pod_ready.go:92] pod "kube-controller-manager-pause-647181" in "kube-system" namespace has status "Ready":"True"
	I1006 02:53:53.587572 2399191 pod_ready.go:81] duration metric: took 1.021241s waiting for pod "kube-controller-manager-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:53.587613 2399191 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9vvq2" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:53.665492 2399191 pod_ready.go:92] pod "kube-proxy-9vvq2" in "kube-system" namespace has status "Ready":"True"
	I1006 02:53:53.665561 2399191 pod_ready.go:81] duration metric: took 77.922111ms waiting for pod "kube-proxy-9vvq2" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:53.665585 2399191 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:54.066851 2399191 pod_ready.go:92] pod "kube-scheduler-pause-647181" in "kube-system" namespace has status "Ready":"True"
	I1006 02:53:54.067004 2399191 pod_ready.go:81] duration metric: took 401.387707ms waiting for pod "kube-scheduler-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:54.067031 2399191 pod_ready.go:38] duration metric: took 8.560625129s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
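
Each pod_ready.go wait above reduces to polling the pod's PodReady condition until it reports True (coredns needed about 4s; the control-plane pods followed within seconds). A hedged client-go sketch of that loop; the pod name and kubeconfig path are taken from the log, the polling interval is an assumption:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the named pod's PodReady condition is True.
    func podReady(c *kubernetes.Clientset, ns, name string) (bool, error) {
    	pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	for {
    		if ok, err := podReady(client, "kube-system", "coredns-5dd5756b68-qvjlc"); err == nil && ok {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    }
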
	I1006 02:53:54.067102 2399191 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1006 02:53:54.083296 2399191 ops.go:34] apiserver oom_adj: -16
	I1006 02:53:54.083513 2399191 kubeadm.go:640] restartCluster took 1m6.931273862s
	I1006 02:53:54.083541 2399191 kubeadm.go:406] StartCluster complete in 1m7.04546831s
	I1006 02:53:54.083592 2399191 settings.go:142] acquiring lock: {Name:mkbf8759b61c125112e0d07f4c53bb4e84a6de4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:53:54.083808 2399191 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17314-2262959/kubeconfig
	I1006 02:53:54.085262 2399191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/kubeconfig: {Name:mkf2ae4867a3638ac11dac5beae6919a4f83b43a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:53:54.085709 2399191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1006 02:53:54.086445 2399191 config.go:182] Loaded profile config "pause-647181": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1006 02:53:54.086587 2399191 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1006 02:53:54.089245 2399191 out.go:177] * Enabled addons: 
	I1006 02:53:54.088072 2399191 kapi.go:59] client config for pause-647181: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/pause-647181/client.crt", KeyFile:"/home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/pause-647181/client.key", CAFile:"/home/jenkins/minikube-integration/17314-2262959/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16a24a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 02:53:54.092384 2399191 addons.go:502] enable addons completed in 5.7975ms: enabled=[]
	I1006 02:53:54.096110 2399191 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-647181" context rescaled to 1 replicas
	I1006 02:53:54.096165 2399191 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 02:53:54.098257 2399191 out.go:177] * Verifying Kubernetes components...
	I1006 02:53:54.100537 2399191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 02:53:54.263527 2399191 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1006 02:53:54.263600 2399191 node_ready.go:35] waiting up to 6m0s for node "pause-647181" to be "Ready" ...
	I1006 02:53:54.268610 2399191 node_ready.go:49] node "pause-647181" has status "Ready":"True"
	I1006 02:53:54.268689 2399191 node_ready.go:38] duration metric: took 5.07505ms waiting for node "pause-647181" to be "Ready" ...
	I1006 02:53:54.268715 2399191 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1006 02:53:54.475622 2399191 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qvjlc" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:54.865846 2399191 pod_ready.go:92] pod "coredns-5dd5756b68-qvjlc" in "kube-system" namespace has status "Ready":"True"
	I1006 02:53:54.865922 2399191 pod_ready.go:81] duration metric: took 390.233621ms waiting for pod "coredns-5dd5756b68-qvjlc" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:54.865958 2399191 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:55.264593 2399191 pod_ready.go:92] pod "etcd-pause-647181" in "kube-system" namespace has status "Ready":"True"
	I1006 02:53:55.264632 2399191 pod_ready.go:81] duration metric: took 398.648295ms waiting for pod "etcd-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:55.264662 2399191 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:55.665133 2399191 pod_ready.go:92] pod "kube-apiserver-pause-647181" in "kube-system" namespace has status "Ready":"True"
	I1006 02:53:55.665167 2399191 pod_ready.go:81] duration metric: took 400.490903ms waiting for pod "kube-apiserver-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:55.665180 2399191 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:56.064620 2399191 pod_ready.go:92] pod "kube-controller-manager-pause-647181" in "kube-system" namespace has status "Ready":"True"
	I1006 02:53:56.064693 2399191 pod_ready.go:81] duration metric: took 399.50313ms waiting for pod "kube-controller-manager-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:56.064710 2399191 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9vvq2" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:56.468816 2399191 pod_ready.go:92] pod "kube-proxy-9vvq2" in "kube-system" namespace has status "Ready":"True"
	I1006 02:53:56.468843 2399191 pod_ready.go:81] duration metric: took 404.124547ms waiting for pod "kube-proxy-9vvq2" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:56.468866 2399191 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:56.865980 2399191 pod_ready.go:92] pod "kube-scheduler-pause-647181" in "kube-system" namespace has status "Ready":"True"
	I1006 02:53:56.866009 2399191 pod_ready.go:81] duration metric: took 397.135665ms waiting for pod "kube-scheduler-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:56.866019 2399191 pod_ready.go:38] duration metric: took 2.597262607s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1006 02:53:56.866034 2399191 api_server.go:52] waiting for apiserver process to appear ...
	I1006 02:53:56.866097 2399191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 02:53:56.881063 2399191 api_server.go:72] duration metric: took 2.784851851s to wait for apiserver process to appear ...
	I1006 02:53:56.881088 2399191 api_server.go:88] waiting for apiserver healthz status ...
	I1006 02:53:56.881105 2399191 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1006 02:53:56.890944 2399191 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1006 02:53:56.892803 2399191 api_server.go:141] control plane version: v1.28.2
	I1006 02:53:56.892835 2399191 api_server.go:131] duration metric: took 11.740115ms to wait for apiserver health ...
	I1006 02:53:56.892845 2399191 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 02:53:57.069255 2399191 system_pods.go:59] 7 kube-system pods found
	I1006 02:53:57.069337 2399191 system_pods.go:61] "coredns-5dd5756b68-qvjlc" [5582861e-0771-4be9-85fa-3194c946e4bc] Running
	I1006 02:53:57.069359 2399191 system_pods.go:61] "etcd-pause-647181" [2278dbab-5509-4155-ab45-4bbc4a195cea] Running
	I1006 02:53:57.069382 2399191 system_pods.go:61] "kindnet-5zz7b" [006acf6b-70f8-4596-8965-0f13beb4fff6] Running
	I1006 02:53:57.069420 2399191 system_pods.go:61] "kube-apiserver-pause-647181" [d0788f9d-f101-42ba-9145-4435f137fa8b] Running
	I1006 02:53:57.069452 2399191 system_pods.go:61] "kube-controller-manager-pause-647181" [255471e6-b5e4-4d3e-b091-02979c39ab2a] Running
	I1006 02:53:57.069473 2399191 system_pods.go:61] "kube-proxy-9vvq2" [eec193c3-c96f-43e8-a0c3-e0964c0c7b51] Running
	I1006 02:53:57.069511 2399191 system_pods.go:61] "kube-scheduler-pause-647181" [cb621118-2a4c-4ed4-88ad-f64283986b73] Running
	I1006 02:53:57.069539 2399191 system_pods.go:74] duration metric: took 176.68489ms to wait for pod list to return data ...
	I1006 02:53:57.069564 2399191 default_sa.go:34] waiting for default service account to be created ...
	I1006 02:53:57.263979 2399191 default_sa.go:45] found service account: "default"
	I1006 02:53:57.264049 2399191 default_sa.go:55] duration metric: took 194.449112ms for default service account to be created ...
	I1006 02:53:57.264074 2399191 system_pods.go:116] waiting for k8s-apps to be running ...
	I1006 02:53:57.469340 2399191 system_pods.go:86] 7 kube-system pods found
	I1006 02:53:57.469414 2399191 system_pods.go:89] "coredns-5dd5756b68-qvjlc" [5582861e-0771-4be9-85fa-3194c946e4bc] Running
	I1006 02:53:57.469453 2399191 system_pods.go:89] "etcd-pause-647181" [2278dbab-5509-4155-ab45-4bbc4a195cea] Running
	I1006 02:53:57.469482 2399191 system_pods.go:89] "kindnet-5zz7b" [006acf6b-70f8-4596-8965-0f13beb4fff6] Running
	I1006 02:53:57.469503 2399191 system_pods.go:89] "kube-apiserver-pause-647181" [d0788f9d-f101-42ba-9145-4435f137fa8b] Running
	I1006 02:53:57.469535 2399191 system_pods.go:89] "kube-controller-manager-pause-647181" [255471e6-b5e4-4d3e-b091-02979c39ab2a] Running
	I1006 02:53:57.469561 2399191 system_pods.go:89] "kube-proxy-9vvq2" [eec193c3-c96f-43e8-a0c3-e0964c0c7b51] Running
	I1006 02:53:57.469580 2399191 system_pods.go:89] "kube-scheduler-pause-647181" [cb621118-2a4c-4ed4-88ad-f64283986b73] Running
	I1006 02:53:57.469615 2399191 system_pods.go:126] duration metric: took 205.522424ms to wait for k8s-apps to be running ...
	I1006 02:53:57.469648 2399191 system_svc.go:44] waiting for kubelet service to be running ....
	I1006 02:53:57.469734 2399191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 02:53:57.490000 2399191 system_svc.go:56] duration metric: took 20.348879ms WaitForService to wait for kubelet.
	I1006 02:53:57.490074 2399191 kubeadm.go:581] duration metric: took 3.393879604s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1006 02:53:57.490121 2399191 node_conditions.go:102] verifying NodePressure condition ...
	I1006 02:53:57.665418 2399191 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 02:53:57.665492 2399191 node_conditions.go:123] node cpu capacity is 2
	I1006 02:53:57.665516 2399191 node_conditions.go:105] duration metric: took 175.362394ms to run NodePressure ...
	I1006 02:53:57.665541 2399191 start.go:228] waiting for startup goroutines ...
	I1006 02:53:57.665575 2399191 start.go:233] waiting for cluster config update ...
	I1006 02:53:57.665601 2399191 start.go:242] writing updated cluster config ...
	I1006 02:53:57.665988 2399191 ssh_runner.go:195] Run: rm -f paused
	I1006 02:53:57.749083 2399191 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1006 02:53:57.752833 2399191 out.go:177] * Done! kubectl is now configured to use "pause-647181" cluster and "default" namespace by default

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-647181
helpers_test.go:235: (dbg) docker inspect pause-647181:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d963a8039ef745e725d58e86c36b0b62a1c7fdb86c367a3dda708055b98aab82",
	        "Created": "2023-10-06T02:51:15.176736064Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2392407,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-06T02:51:15.609872109Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7c31788aee97084e64d3a410721295a10fc01c1f34b468c1bc9be09686708026",
	        "ResolvConfPath": "/var/lib/docker/containers/d963a8039ef745e725d58e86c36b0b62a1c7fdb86c367a3dda708055b98aab82/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d963a8039ef745e725d58e86c36b0b62a1c7fdb86c367a3dda708055b98aab82/hostname",
	        "HostsPath": "/var/lib/docker/containers/d963a8039ef745e725d58e86c36b0b62a1c7fdb86c367a3dda708055b98aab82/hosts",
	        "LogPath": "/var/lib/docker/containers/d963a8039ef745e725d58e86c36b0b62a1c7fdb86c367a3dda708055b98aab82/d963a8039ef745e725d58e86c36b0b62a1c7fdb86c367a3dda708055b98aab82-json.log",
	        "Name": "/pause-647181",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-647181:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-647181",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/85cfb0b01ca49359273815724ec03173a1977160157825a7df07e9dfab7ef3b8-init/diff:/var/lib/docker/overlay2/ab4f4fc5e8cd2d4bbf1718e21432b9cb0d953b7279be1c1cbb7bd550f03b46dc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/85cfb0b01ca49359273815724ec03173a1977160157825a7df07e9dfab7ef3b8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/85cfb0b01ca49359273815724ec03173a1977160157825a7df07e9dfab7ef3b8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/85cfb0b01ca49359273815724ec03173a1977160157825a7df07e9dfab7ef3b8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-647181",
	                "Source": "/var/lib/docker/volumes/pause-647181/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-647181",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-647181",
	                "name.minikube.sigs.k8s.io": "pause-647181",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9ee03075612006d4af3582c8b04a2cbedfe033ecb46d1e46a79dd409b1fff037",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35455"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35454"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35451"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35453"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35452"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/9ee030756120",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-647181": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d963a8039ef7",
	                        "pause-647181"
	                    ],
	                    "NetworkID": "0626abfe2f4457ad5c980476dc74b843a105e00e3a82a4ae05090fa29c092936",
	                    "EndpointID": "0f2153e88e1c0c0f9e7cf6122748de573c565f656e3f28f5807a95b3a9e51a6d",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
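The inspect dump above ends with the container's published ports: five 127.0.0.1 host bindings (35451-35455) and the static address 192.168.67.2 on the pause-647181 bridge network. When a failure like this needs scripting, those bindings can be pulled out of the same JSON. A minimal Go sketch, hypothetical and not part of the test suite, decoding only the NetworkSettings.Ports keys shown above (not the full Docker API types) from docker inspect output on stdin:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// container mirrors just the fields used here; docker inspect
// always emits a JSON array, even for a single container.
type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	var cs []container
	if err := json.NewDecoder(os.Stdin).Decode(&cs); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, c := range cs {
		for port, bindings := range c.NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
			}
		}
	}
}

Piped as docker inspect pause-647181 | go run main.go, this would print the 22/2376/5000/8443/32443 bindings listed in the Ports block above.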
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-647181 -n pause-647181
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p pause-647181 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p pause-647181 logs -n 25: (2.424431446s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-084205 sudo                                | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | journalctl -xeu kubelet --all                        |                          |         |         |                     |                     |
	|         | --full --no-pager                                    |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo cat                            | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo cat                            | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | /var/lib/kubelet/config.yaml                         |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo                                | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | systemctl status docker --all                        |                          |         |         |                     |                     |
	|         | --full --no-pager                                    |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo                                | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | systemctl cat docker                                 |                          |         |         |                     |                     |
	|         | --no-pager                                           |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo cat                            | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | /etc/docker/daemon.json                              |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo docker                         | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | system info                                          |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo                                | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | systemctl status cri-docker                          |                          |         |         |                     |                     |
	|         | --all --full --no-pager                              |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo                                | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | systemctl cat cri-docker                             |                          |         |         |                     |                     |
	|         | --no-pager                                           |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo cat                            | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo cat                            | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo                                | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | cri-dockerd --version                                |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo                                | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | systemctl status containerd                          |                          |         |         |                     |                     |
	|         | --all --full --no-pager                              |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo                                | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | systemctl cat containerd                             |                          |         |         |                     |                     |
	|         | --no-pager                                           |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo cat                            | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo cat                            | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | /etc/containerd/config.toml                          |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo                                | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | containerd config dump                               |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo                                | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | systemctl status crio --all                          |                          |         |         |                     |                     |
	|         | --full --no-pager                                    |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo                                | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo find                           | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                          |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo crio                           | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | config                                               |                          |         |         |                     |                     |
	| delete  | -p cilium-084205                                     | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC | 06 Oct 23 02:52 UTC |
	| start   | -p force-systemd-env-836004                          | force-systemd-env-836004 | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC | 06 Oct 23 02:53 UTC |
	|         | --memory=2048                                        |                          |         |         |                     |                     |
	|         | --alsologtostderr                                    |                          |         |         |                     |                     |
	|         | -v=5 --driver=docker                                 |                          |         |         |                     |                     |
	|         | --container-runtime=crio                             |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-836004                          | force-systemd-env-836004 | jenkins | v1.31.2 | 06 Oct 23 02:53 UTC | 06 Oct 23 02:53 UTC |
	| start   | -p cert-expiration-885413                            | cert-expiration-885413   | jenkins | v1.31.2 | 06 Oct 23 02:53 UTC |                     |
	|         | --memory=2048                                        |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                          |         |         |                     |                     |
	|         | --driver=docker                                      |                          |         |         |                     |                     |
	|         | --container-runtime=crio                             |                          |         |         |                     |                     |
	|---------|------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/06 02:53:35
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 02:53:35.736231 2405077 out.go:296] Setting OutFile to fd 1 ...
	I1006 02:53:35.736431 2405077 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 02:53:35.736436 2405077 out.go:309] Setting ErrFile to fd 2...
	I1006 02:53:35.736441 2405077 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 02:53:35.736683 2405077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-2262959/.minikube/bin
	I1006 02:53:35.737058 2405077 out.go:303] Setting JSON to false
	I1006 02:53:35.743555 2405077 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":45362,"bootTime":1696515454,"procs":344,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1006 02:53:35.743633 2405077 start.go:138] virtualization:  
	I1006 02:53:35.747517 2405077 out.go:177] * [cert-expiration-885413] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1006 02:53:35.750397 2405077 out.go:177]   - MINIKUBE_LOCATION=17314
	I1006 02:53:35.752351 2405077 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 02:53:35.750495 2405077 notify.go:220] Checking for updates...
	I1006 02:53:35.754766 2405077 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17314-2262959/kubeconfig
	I1006 02:53:35.757067 2405077 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-2262959/.minikube
	I1006 02:53:35.759203 2405077 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 02:53:35.761385 2405077 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 02:53:35.763856 2405077 config.go:182] Loaded profile config "pause-647181": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1006 02:53:35.763946 2405077 driver.go:378] Setting default libvirt URI to qemu:///system
	I1006 02:53:35.825367 2405077 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1006 02:53:35.825458 2405077 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 02:53:35.964741 2405077 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-06 02:53:35.953220454 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1006 02:53:35.964853 2405077 docker.go:295] overlay module found
	I1006 02:53:35.968556 2405077 out.go:177] * Using the docker driver based on user configuration
	I1006 02:53:35.970539 2405077 start.go:298] selected driver: docker
	I1006 02:53:35.970548 2405077 start.go:902] validating driver "docker" against <nil>
	I1006 02:53:35.970560 2405077 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 02:53:35.971199 2405077 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 02:53:36.143119 2405077 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-06 02:53:36.130389669 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1006 02:53:36.143257 2405077 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1006 02:53:36.143503 2405077 start_flags.go:908] Wait components to verify : map[apiserver:true system_pods:true]
	I1006 02:53:36.146075 2405077 out.go:177] * Using Docker driver with root privileges
	I1006 02:53:36.148191 2405077 cni.go:84] Creating CNI manager for ""
	I1006 02:53:36.148221 2405077 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 02:53:36.148239 2405077 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 02:53:36.148249 2405077 start_flags.go:323] config:
	{Name:cert-expiration-885413 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:cert-expiration-885413 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1006 02:53:36.150723 2405077 out.go:177] * Starting control plane node cert-expiration-885413 in cluster cert-expiration-885413
	I1006 02:53:36.153150 2405077 cache.go:122] Beginning downloading kic base image for docker with crio
	I1006 02:53:36.155113 2405077 out.go:177] * Pulling base image ...
	I1006 02:53:36.157001 2405077 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1006 02:53:36.157201 2405077 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1006 02:53:36.157716 2405077 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
	I1006 02:53:36.157725 2405077 cache.go:57] Caching tarball of preloaded images
	I1006 02:53:36.157798 2405077 preload.go:174] Found /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1006 02:53:36.157804 2405077 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1006 02:53:36.157904 2405077 profile.go:148] Saving config to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/config.json ...
	I1006 02:53:36.157920 2405077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/config.json: {Name:mk44694d6e937d667332cb1b26aad1b2fc901feb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:53:36.188984 2405077 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
	I1006 02:53:36.189002 2405077 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
	I1006 02:53:36.189018 2405077 cache.go:195] Successfully downloaded all kic artifacts
	I1006 02:53:36.189092 2405077 start.go:365] acquiring machines lock for cert-expiration-885413: {Name:mkfcc9d140c46c3c2d732d336710162d1a815c7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 02:53:36.189213 2405077 start.go:369] acquired machines lock for "cert-expiration-885413" in 104.009µs
	I1006 02:53:36.189236 2405077 start.go:93] Provisioning new machine with config: &{Name:cert-expiration-885413 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:cert-expiration-885413 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 02:53:36.189313 2405077 start.go:125] createHost starting for "" (driver="docker")
	I1006 02:53:36.075690 2399191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 02:53:36.108214 2399191 api_server.go:72] duration metric: took 1.14071579s to wait for apiserver process to appear ...
	I1006 02:53:36.108245 2399191 api_server.go:88] waiting for apiserver healthz status ...
	I1006 02:53:36.108278 2399191 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1006 02:53:36.108547 2399191 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I1006 02:53:36.108576 2399191 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1006 02:53:36.108738 2399191 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I1006 02:53:36.609248 2399191 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1006 02:53:36.192894 2405077 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1006 02:53:36.193154 2405077 start.go:159] libmachine.API.Create for "cert-expiration-885413" (driver="docker")
	I1006 02:53:36.193193 2405077 client.go:168] LocalClient.Create starting
	I1006 02:53:36.193259 2405077 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem
	I1006 02:53:36.193291 2405077 main.go:141] libmachine: Decoding PEM data...
	I1006 02:53:36.193306 2405077 main.go:141] libmachine: Parsing certificate...
	I1006 02:53:36.193362 2405077 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem
	I1006 02:53:36.193378 2405077 main.go:141] libmachine: Decoding PEM data...
	I1006 02:53:36.193390 2405077 main.go:141] libmachine: Parsing certificate...
	I1006 02:53:36.193748 2405077 cli_runner.go:164] Run: docker network inspect cert-expiration-885413 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 02:53:36.221930 2405077 cli_runner.go:211] docker network inspect cert-expiration-885413 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 02:53:36.222012 2405077 network_create.go:281] running [docker network inspect cert-expiration-885413] to gather additional debugging logs...
	I1006 02:53:36.222032 2405077 cli_runner.go:164] Run: docker network inspect cert-expiration-885413
	W1006 02:53:36.249849 2405077 cli_runner.go:211] docker network inspect cert-expiration-885413 returned with exit code 1
	I1006 02:53:36.249897 2405077 network_create.go:284] error running [docker network inspect cert-expiration-885413]: docker network inspect cert-expiration-885413: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network cert-expiration-885413 not found
	I1006 02:53:36.249909 2405077 network_create.go:286] output of [docker network inspect cert-expiration-885413]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network cert-expiration-885413 not found
	
	** /stderr **
	I1006 02:53:36.250019 2405077 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 02:53:36.286688 2405077 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-23fd96ce330f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:5d:0d:78:1a} reservation:<nil>}
	I1006 02:53:36.287157 2405077 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-8cf15a65a1dd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:06:08:d3:35} reservation:<nil>}
	I1006 02:53:36.287667 2405077 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0626abfe2f44 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:84:59:a7:45} reservation:<nil>}
	I1006 02:53:36.288335 2405077 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002614db0}
	I1006 02:53:36.288371 2405077 network_create.go:124] attempt to create docker network cert-expiration-885413 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1006 02:53:36.288441 2405077 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-885413 cert-expiration-885413
	I1006 02:53:36.386622 2405077 network_create.go:108] docker network cert-expiration-885413 192.168.76.0/24 created
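The three "skipping subnet" lines above, followed by "using free private subnet 192.168.76.0/24", show the KIC driver stepping through candidate private /24s (49, 58, 67, then 76 — an increment of 9, as the sequence suggests) until one has no existing bridge. A rough Go sketch of that walk; taken() is a hypothetical stand-in for minikube's real interface inspection, stubbed here with the three networks seen in this log:

package main

import "fmt"

// taken is a stand-in for checking whether a /24 already backs a
// local docker bridge; stubbed with the subnets from the log above.
func taken(subnet string) bool {
	used := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	return used[subnet]
}

func main() {
	// Probe 192.168.49.0/24 upward in steps of 9 until a free one appears.
	for octet := 49; octet < 256; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if taken(subnet) {
			fmt.Println("skipping subnet", subnet, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", subnet)
		return
	}
}

Run as-is, this prints the same progression the log shows, ending at 192.168.76.0/24.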
	I1006 02:53:36.386642 2405077 kic.go:118] calculated static IP "192.168.76.2" for the "cert-expiration-885413" container
	I1006 02:53:36.386715 2405077 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 02:53:36.410801 2405077 cli_runner.go:164] Run: docker volume create cert-expiration-885413 --label name.minikube.sigs.k8s.io=cert-expiration-885413 --label created_by.minikube.sigs.k8s.io=true
	I1006 02:53:36.440765 2405077 oci.go:103] Successfully created a docker volume cert-expiration-885413
	I1006 02:53:36.440848 2405077 cli_runner.go:164] Run: docker run --rm --name cert-expiration-885413-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-885413 --entrypoint /usr/bin/test -v cert-expiration-885413:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1006 02:53:37.230016 2405077 oci.go:107] Successfully prepared a docker volume cert-expiration-885413
	I1006 02:53:37.230051 2405077 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1006 02:53:37.230068 2405077 kic.go:191] Starting extracting preloaded images to volume ...
	I1006 02:53:37.230157 2405077 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-885413:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir
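The extraction step above is worth noting: rather than untarring on the host, minikube runs tar inside the kicbase image, with the lz4 preload mounted read-only at /preloaded.tar and the machine's volume mounted at /extractDir, so the cached images land directly in the container's /var. A sketch of the same invocation driven from Go via os/exec (the paths, volume name, and image are copied from this log; this wrapper itself is illustrative, not minikube's code):

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Mount the preload tarball read-only, mount the machine volume
	// at /extractDir, and let tar inside the kicbase image unpack it.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", "/home/jenkins/minikube-integration/17314-2262959/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro",
		"-v", "cert-expiration-885413:/extractDir",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae",
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}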
	I1006 02:53:41.609651 2399191 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1006 02:53:41.609692 2399191 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1006 02:53:41.901879 2399191 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1006 02:53:41.901909 2399191 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
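Each pair of blocks above is one poll: api_server.go logs the raw 500 body once on retrieval and again as a warning. The [-] entries (etcd, rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, bootstrap-controller) are the checks still pending, and they flip to [+] one by one in the later polls below. A minimal sketch of such a poll loop, assuming for brevity that certificate verification is skipped (the real client trusts the cluster CA from the kubeconfig):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// InsecureSkipVerify is only for this sketch; minikube proper
	// pins the cluster CA rather than skipping verification.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for {
		resp, err := client.Get("https://192.168.67.2:8443/healthz")
		if err != nil {
			// connection refused while the apiserver is restarting
			time.Sleep(500 * time.Millisecond)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy")
			return
		}
		// On failure, /healthz returns the per-check [+]/[-] breakdown
		// seen in the log above.
		fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
		time.Sleep(500 * time.Millisecond)
	}
}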
	I1006 02:53:41.901926 2399191 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1006 02:53:41.912718 2399191 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1006 02:53:41.912753 2399191 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1006 02:53:42.108884 2399191 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1006 02:53:42.132565 2399191 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1006 02:53:42.132597 2399191 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1006 02:53:42.609872 2399191 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1006 02:53:42.630319 2399191 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1006 02:53:42.630345 2399191 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1006 02:53:43.110627 2399191 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1006 02:53:43.142092 2399191 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1006 02:53:43.142116 2399191 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1006 02:53:43.609334 2399191 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1006 02:53:43.629903 2399191 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1006 02:53:43.629986 2399191 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1006 02:53:44.109232 2399191 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1006 02:53:44.131027 2399191 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1006 02:53:44.152108 2399191 api_server.go:141] control plane version: v1.28.2
	I1006 02:53:44.152146 2399191 api_server.go:131] duration metric: took 8.043892733s to wait for apiserver health ...
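
The wait above is a simple poll of the apiserver's /healthz endpoint until it returns 200. A minimal sketch of such a loop, assuming only Go's net/http; the URL, timeout, and interval below are taken or guessed from the log and this is not minikube's actual api_server.go:

	// healthz_poll.go — sketch of a poll-until-healthy loop (assumed values, not minikube code).
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The apiserver cert is self-signed during bootstrap, so a
			// kubeconfig-less probe has to skip verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // the "returned 200: ok" case above
				}
				// Mirrors the log: a 500 with [+]/[-] per-hook lines means some
				// poststarthook (e.g. rbac/bootstrap-roles) has not finished yet.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.67.2:8443/healthz", 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}
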
	I1006 02:53:44.152157 2399191 cni.go:84] Creating CNI manager for ""
	I1006 02:53:44.152170 2399191 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 02:53:44.154711 2399191 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1006 02:53:44.156991 2399191 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1006 02:53:44.174968 2399191 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1006 02:53:44.174988 2399191 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1006 02:53:44.219173 2399191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1006 02:53:45.260624 2399191 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.041361101s)
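
The CNI manifest is applied with the version-pinned kubectl binary inside the node. A sketch of the same invocation via os/exec, run locally for illustration (minikube actually runs it through its ssh_runner; the paths are copied from the log):

	// cni_apply.go — sketch of the "apply CNI manifest" step using plain os/exec.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.28.2/kubectl" // path as shown in the log
		cmd := exec.Command("sudo", kubectl,
			"apply",
			"--kubeconfig=/var/lib/minikube/kubeconfig",
			"-f", "/var/tmp/minikube/cni.yaml")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("apply failed:", err)
		}
	}
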
	I1006 02:53:45.260679 2399191 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 02:53:45.271930 2399191 system_pods.go:59] 7 kube-system pods found
	I1006 02:53:45.271980 2399191 system_pods.go:61] "coredns-5dd5756b68-qvjlc" [5582861e-0771-4be9-85fa-3194c946e4bc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 02:53:45.271990 2399191 system_pods.go:61] "etcd-pause-647181" [2278dbab-5509-4155-ab45-4bbc4a195cea] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1006 02:53:45.271999 2399191 system_pods.go:61] "kindnet-5zz7b" [006acf6b-70f8-4596-8965-0f13beb4fff6] Running
	I1006 02:53:45.272006 2399191 system_pods.go:61] "kube-apiserver-pause-647181" [d0788f9d-f101-42ba-9145-4435f137fa8b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1006 02:53:45.272036 2399191 system_pods.go:61] "kube-controller-manager-pause-647181" [255471e6-b5e4-4d3e-b091-02979c39ab2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1006 02:53:45.272042 2399191 system_pods.go:61] "kube-proxy-9vvq2" [eec193c3-c96f-43e8-a0c3-e0964c0c7b51] Running
	I1006 02:53:45.272051 2399191 system_pods.go:61] "kube-scheduler-pause-647181" [cb621118-2a4c-4ed4-88ad-f64283986b73] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 02:53:45.272058 2399191 system_pods.go:74] duration metric: took 11.353293ms to wait for pod list to return data ...
	I1006 02:53:45.272067 2399191 node_conditions.go:102] verifying NodePressure condition ...
	I1006 02:53:45.275742 2399191 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 02:53:45.275780 2399191 node_conditions.go:123] node cpu capacity is 2
	I1006 02:53:45.275792 2399191 node_conditions.go:105] duration metric: took 3.720589ms to run NodePressure ...
	I1006 02:53:45.275815 2399191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 02:53:45.497975 2399191 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1006 02:53:45.506362 2399191 kubeadm.go:787] kubelet initialised
	I1006 02:53:45.506385 2399191 kubeadm.go:788] duration metric: took 8.350181ms waiting for restarted kubelet to initialise ...
	I1006 02:53:45.506395 2399191 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1006 02:53:45.512441 2399191 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-qvjlc" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:42.096404 2405077 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-885413:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir: (4.866192142s)
	I1006 02:53:42.096423 2405077 kic.go:200] duration metric: took 4.866352 seconds to extract preloaded images to volume
	W1006 02:53:42.096584 2405077 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1006 02:53:42.096701 2405077 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 02:53:42.264069 2405077 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-expiration-885413 --name cert-expiration-885413 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-885413 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-expiration-885413 --network cert-expiration-885413 --ip 192.168.76.2 --volume cert-expiration-885413:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae
	I1006 02:53:42.837568 2405077 cli_runner.go:164] Run: docker container inspect cert-expiration-885413 --format={{.State.Running}}
	I1006 02:53:42.866448 2405077 cli_runner.go:164] Run: docker container inspect cert-expiration-885413 --format={{.State.Status}}
	I1006 02:53:42.895959 2405077 cli_runner.go:164] Run: docker exec cert-expiration-885413 stat /var/lib/dpkg/alternatives/iptables
	I1006 02:53:43.035500 2405077 oci.go:144] the created container "cert-expiration-885413" has a running status.
	I1006 02:53:43.035518 2405077 kic.go:222] Creating ssh key for kic: /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/cert-expiration-885413/id_rsa...
	I1006 02:53:43.473606 2405077 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/cert-expiration-885413/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 02:53:43.510827 2405077 cli_runner.go:164] Run: docker container inspect cert-expiration-885413 --format={{.State.Status}}
	I1006 02:53:43.540123 2405077 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 02:53:43.540135 2405077 kic_runner.go:114] Args: [docker exec --privileged cert-expiration-885413 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1006 02:53:43.644357 2405077 cli_runner.go:164] Run: docker container inspect cert-expiration-885413 --format={{.State.Status}}
	I1006 02:53:43.686525 2405077 machine.go:88] provisioning docker machine ...
	I1006 02:53:43.686546 2405077 ubuntu.go:169] provisioning hostname "cert-expiration-885413"
	I1006 02:53:43.686608 2405077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-885413
	I1006 02:53:43.719942 2405077 main.go:141] libmachine: Using SSH client type: native
	I1006 02:53:43.720373 2405077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 35470 <nil> <nil>}
	I1006 02:53:43.720384 2405077 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-885413 && echo "cert-expiration-885413" | sudo tee /etc/hostname
	I1006 02:53:43.720989 2405077 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42208->127.0.0.1:35470: read: connection reset by peer
	I1006 02:53:46.870925 2405077 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-885413
	
	I1006 02:53:46.871003 2405077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-885413
	I1006 02:53:46.896957 2405077 main.go:141] libmachine: Using SSH client type: native
	I1006 02:53:46.897364 2405077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 35470 <nil> <nil>}
	I1006 02:53:46.897380 2405077 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-885413' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-885413/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-885413' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 02:53:47.028614 2405077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
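
The hostname provisioning above runs over SSH against the container's published port-22 mapping (127.0.0.1:35470). A sketch of that step, assuming golang.org/x/crypto/ssh; the user, port, and key path come from the log but are not guaranteed to match libmachine's internals:

	// provision_ssh.go — sketch of running a provisioning command over SSH.
	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/cert-expiration-885413/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local throwaway node
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:35470", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()

		// The exact command shown in the log above.
		out, err := session.CombinedOutput(
			`sudo hostname cert-expiration-885413 && echo "cert-expiration-885413" | sudo tee /etc/hostname`)
		fmt.Printf("%s err=%v\n", out, err)
	}
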
	I1006 02:53:47.028631 2405077 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17314-2262959/.minikube CaCertPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17314-2262959/.minikube}
	I1006 02:53:47.028650 2405077 ubuntu.go:177] setting up certificates
	I1006 02:53:47.028658 2405077 provision.go:83] configureAuth start
	I1006 02:53:47.028716 2405077 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-885413
	I1006 02:53:47.063445 2405077 provision.go:138] copyHostCerts
	I1006 02:53:47.063519 2405077 exec_runner.go:144] found /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.pem, removing ...
	I1006 02:53:47.063530 2405077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.pem
	I1006 02:53:47.063648 2405077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.pem (1082 bytes)
	I1006 02:53:47.063762 2405077 exec_runner.go:144] found /home/jenkins/minikube-integration/17314-2262959/.minikube/cert.pem, removing ...
	I1006 02:53:47.063772 2405077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17314-2262959/.minikube/cert.pem
	I1006 02:53:47.063817 2405077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17314-2262959/.minikube/cert.pem (1123 bytes)
	I1006 02:53:47.063913 2405077 exec_runner.go:144] found /home/jenkins/minikube-integration/17314-2262959/.minikube/key.pem, removing ...
	I1006 02:53:47.063916 2405077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17314-2262959/.minikube/key.pem
	I1006 02:53:47.063945 2405077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17314-2262959/.minikube/key.pem (1675 bytes)
	I1006 02:53:47.064012 2405077 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-885413 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube cert-expiration-885413]
	I1006 02:53:48.059635 2405077 provision.go:172] copyRemoteCerts
	I1006 02:53:48.059710 2405077 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 02:53:48.059779 2405077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-885413
	I1006 02:53:48.080786 2405077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35470 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/cert-expiration-885413/id_rsa Username:docker}
	I1006 02:53:48.178185 2405077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 02:53:48.207345 2405077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1006 02:53:48.236906 2405077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1006 02:53:48.268190 2405077 provision.go:86] duration metric: configureAuth took 1.239519759s
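
configureAuth generates a server certificate whose SANs cover the node IP, 127.0.0.1, localhost, and the machine name (see the provision.go:112 line above). A sketch of issuing such a cert with Go's crypto/x509, using a throwaway CA in place of minikube's cached CA; the key size and validity period are assumptions:

	// gen_server_cert.go — sketch of issuing a SAN'd server cert (errors elided for brevity).
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA; in the real flow this is the cached minikube CA key pair.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert with the SAN list from the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.cert-expiration-885413"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "cert-expiration-885413"},
			IPAddresses:  []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}
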
	I1006 02:53:48.268206 2405077 ubuntu.go:193] setting minikube options for container-runtime
	I1006 02:53:48.268389 2405077 config.go:182] Loaded profile config "cert-expiration-885413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1006 02:53:48.268499 2405077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-885413
	I1006 02:53:48.287707 2405077 main.go:141] libmachine: Using SSH client type: native
	I1006 02:53:48.288125 2405077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 35470 <nil> <nil>}
	I1006 02:53:48.288138 2405077 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 02:53:48.546423 2405077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 02:53:48.546436 2405077 machine.go:91] provisioned docker machine in 4.859899606s
	I1006 02:53:48.546443 2405077 client.go:171] LocalClient.Create took 12.353246432s
	I1006 02:53:48.546454 2405077 start.go:167] duration metric: libmachine.API.Create for "cert-expiration-885413" took 12.353301875s
	I1006 02:53:48.546461 2405077 start.go:300] post-start starting for "cert-expiration-885413" (driver="docker")
	I1006 02:53:48.546470 2405077 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 02:53:48.546539 2405077 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 02:53:48.546577 2405077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-885413
	I1006 02:53:48.565623 2405077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35470 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/cert-expiration-885413/id_rsa Username:docker}
	I1006 02:53:48.662428 2405077 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 02:53:48.666712 2405077 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 02:53:48.666739 2405077 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1006 02:53:48.666752 2405077 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1006 02:53:48.666758 2405077 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1006 02:53:48.666768 2405077 filesync.go:126] Scanning /home/jenkins/minikube-integration/17314-2262959/.minikube/addons for local assets ...
	I1006 02:53:48.666838 2405077 filesync.go:126] Scanning /home/jenkins/minikube-integration/17314-2262959/.minikube/files for local assets ...
	I1006 02:53:48.666916 2405077 filesync.go:149] local asset: /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/22683062.pem -> 22683062.pem in /etc/ssl/certs
	I1006 02:53:48.667030 2405077 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 02:53:48.677676 2405077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/22683062.pem --> /etc/ssl/certs/22683062.pem (1708 bytes)
	I1006 02:53:48.707706 2405077 start.go:303] post-start completed in 161.230835ms
	I1006 02:53:48.708082 2405077 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-885413
	I1006 02:53:48.726129 2405077 profile.go:148] Saving config to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/config.json ...
	I1006 02:53:48.726403 2405077 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 02:53:48.726446 2405077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-885413
	I1006 02:53:48.745418 2405077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35470 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/cert-expiration-885413/id_rsa Username:docker}
	I1006 02:53:48.841457 2405077 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 02:53:48.848005 2405077 start.go:128] duration metric: createHost completed in 12.658666251s
	I1006 02:53:48.848023 2405077 start.go:83] releasing machines lock for "cert-expiration-885413", held for 12.658803926s
	I1006 02:53:48.848111 2405077 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-885413
	I1006 02:53:48.867474 2405077 ssh_runner.go:195] Run: cat /version.json
	I1006 02:53:48.867514 2405077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-885413
	I1006 02:53:48.867783 2405077 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 02:53:48.867858 2405077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-885413
	I1006 02:53:48.888489 2405077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35470 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/cert-expiration-885413/id_rsa Username:docker}
	I1006 02:53:48.910596 2405077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35470 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/cert-expiration-885413/id_rsa Username:docker}
	I1006 02:53:48.983565 2405077 ssh_runner.go:195] Run: systemctl --version
	I1006 02:53:49.124023 2405077 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 02:53:49.272586 2405077 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1006 02:53:49.277974 2405077 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 02:53:49.303481 2405077 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1006 02:53:49.303554 2405077 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 02:53:49.345680 2405077 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1006 02:53:49.345693 2405077 start.go:472] detecting cgroup driver to use...
	I1006 02:53:49.345725 2405077 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1006 02:53:49.345777 2405077 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 02:53:49.364476 2405077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 02:53:49.378446 2405077 docker.go:198] disabling cri-docker service (if available) ...
	I1006 02:53:49.378501 2405077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 02:53:49.397918 2405077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 02:53:49.417318 2405077 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 02:53:49.517598 2405077 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 02:53:49.639706 2405077 docker.go:214] disabling docker service ...
	I1006 02:53:49.639783 2405077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 02:53:49.664397 2405077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 02:53:49.679019 2405077 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 02:53:49.779803 2405077 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 02:53:49.897019 2405077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 02:53:49.913676 2405077 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 02:53:49.934510 2405077 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1006 02:53:49.934568 2405077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 02:53:49.947817 2405077 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 02:53:49.947894 2405077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 02:53:49.961100 2405077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 02:53:49.974165 2405077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 02:53:49.988718 2405077 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 02:53:50.014719 2405077 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 02:53:50.026707 2405077 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 02:53:50.038566 2405077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 02:53:50.147283 2405077 ssh_runner.go:195] Run: sudo systemctl restart crio
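
The two sed invocations above pin the CRI-O pause image and cgroup driver in 02-crio.conf before the restart. A sketch of the same edit done with Go's regexp instead of sed; the file path is from the log, it must run as root (the log uses sudo), and a crio restart is still needed afterwards as shown:

	// crio_conf.go — sketch of rewriting pause_image and cgroup_manager in place.
	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(path, data, 0o644); err != nil {
			panic(err)
		}
	}
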
	I1006 02:53:50.284060 2405077 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 02:53:50.284123 2405077 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 02:53:50.291083 2405077 start.go:540] Will wait 60s for crictl version
	I1006 02:53:50.291150 2405077 ssh_runner.go:195] Run: which crictl
	I1006 02:53:50.296112 2405077 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1006 02:53:50.351375 2405077 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
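
"Will wait 60s for socket path" is essentially a stat() poll until /var/run/crio/crio.sock exists. A sketch of that wait; the 250ms interval is an assumption:

	// wait_socket.go — sketch of polling for the CRI socket to appear.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForPath(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil // socket file exists; crictl can be queried now
			}
			time.Sleep(250 * time.Millisecond)
		}
		return fmt.Errorf("%s did not appear within %s", path, timeout)
	}

	func main() {
		if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}
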
	I1006 02:53:50.351455 2405077 ssh_runner.go:195] Run: crio --version
	I1006 02:53:50.398051 2405077 ssh_runner.go:195] Run: crio --version
	I1006 02:53:50.453705 2405077 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	I1006 02:53:47.532776 2399191 pod_ready.go:102] pod "coredns-5dd5756b68-qvjlc" in "kube-system" namespace has status "Ready":"False"
	I1006 02:53:49.533221 2399191 pod_ready.go:92] pod "coredns-5dd5756b68-qvjlc" in "kube-system" namespace has status "Ready":"True"
	I1006 02:53:49.533240 2399191 pod_ready.go:81] duration metric: took 4.020762669s waiting for pod "coredns-5dd5756b68-qvjlc" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:49.533251 2399191 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:49.541162 2399191 pod_ready.go:92] pod "etcd-pause-647181" in "kube-system" namespace has status "Ready":"True"
	I1006 02:53:49.541231 2399191 pod_ready.go:81] duration metric: took 7.96315ms waiting for pod "etcd-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:49.541260 2399191 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:50.455709 2405077 cli_runner.go:164] Run: docker network inspect cert-expiration-885413 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 02:53:50.473379 2405077 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1006 02:53:50.478056 2405077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 02:53:50.492957 2405077 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1006 02:53:50.493011 2405077 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 02:53:50.566160 2405077 crio.go:496] all images are preloaded for cri-o runtime.
	I1006 02:53:50.566171 2405077 crio.go:415] Images already preloaded, skipping extraction
	I1006 02:53:50.566237 2405077 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 02:53:50.610992 2405077 crio.go:496] all images are preloaded for cri-o runtime.
	I1006 02:53:50.611003 2405077 cache_images.go:84] Images are preloaded, skipping loading
	I1006 02:53:50.611137 2405077 ssh_runner.go:195] Run: crio config
	I1006 02:53:50.667906 2405077 cni.go:84] Creating CNI manager for ""
	I1006 02:53:50.667917 2405077 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 02:53:50.667939 2405077 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1006 02:53:50.667957 2405077 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-885413 NodeName:cert-expiration-885413 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 02:53:50.668085 2405077 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-885413"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 02:53:50.668160 2405077 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=cert-expiration-885413 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:cert-expiration-885413 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1006 02:53:50.668223 2405077 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1006 02:53:50.679723 2405077 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 02:53:50.679793 2405077 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 02:53:50.690549 2405077 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I1006 02:53:50.711802 2405077 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 02:53:50.733502 2405077 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
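
The kubeadm config printed above is rendered from the kubeadm options struct and then scp'd to /var/tmp/minikube/kubeadm.yaml.new. A sketch of rendering such a fragment with text/template; only the InitConfiguration portion is shown, with field values mirroring the log:

	// kubeadm_tmpl.go — sketch of templating a kubeadm InitConfiguration fragment.
	package main

	import (
		"os"
		"text/template"
	)

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.NodeIP}}
	  bindPort: {{.Port}}
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.NodeIP}}
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		err := t.Execute(os.Stdout, struct {
			NodeIP   string
			Port     int
			NodeName string
		}{"192.168.76.2", 8443, "cert-expiration-885413"})
		if err != nil {
			panic(err)
		}
	}
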
	I1006 02:53:51.567383 2399191 pod_ready.go:102] pod "kube-apiserver-pause-647181" in "kube-system" namespace has status "Ready":"False"
	I1006 02:53:52.566289 2399191 pod_ready.go:92] pod "kube-apiserver-pause-647181" in "kube-system" namespace has status "Ready":"True"
	I1006 02:53:52.566311 2399191 pod_ready.go:81] duration metric: took 3.025028247s waiting for pod "kube-apiserver-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:52.566323 2399191 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:53.587501 2399191 pod_ready.go:92] pod "kube-controller-manager-pause-647181" in "kube-system" namespace has status "Ready":"True"
	I1006 02:53:53.587572 2399191 pod_ready.go:81] duration metric: took 1.021241s waiting for pod "kube-controller-manager-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:53.587613 2399191 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9vvq2" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:53.665492 2399191 pod_ready.go:92] pod "kube-proxy-9vvq2" in "kube-system" namespace has status "Ready":"True"
	I1006 02:53:53.665561 2399191 pod_ready.go:81] duration metric: took 77.922111ms waiting for pod "kube-proxy-9vvq2" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:53.665585 2399191 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:54.066851 2399191 pod_ready.go:92] pod "kube-scheduler-pause-647181" in "kube-system" namespace has status "Ready":"True"
	I1006 02:53:54.067004 2399191 pod_ready.go:81] duration metric: took 401.387707ms waiting for pod "kube-scheduler-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:54.067031 2399191 pod_ready.go:38] duration metric: took 8.560625129s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
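
Each pod_ready.go wait above amounts to polling the pod's PodReady condition until it reports True. A sketch with k8s.io/client-go; the kubeconfig path, namespace, pod name, and intervals are taken from the log or assumed:

	// pod_ready.go — sketch of waiting for a pod's Ready condition via client-go.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		err = wait.PollImmediate(500*time.Millisecond, 4*time.Minute, func() (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-pause-647181", metav1.GetOptions{})
			if err != nil {
				return false, nil // not visible yet: keep polling, as the log does
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
		fmt.Println("ready:", err == nil)
	}
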
	I1006 02:53:54.067102 2399191 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1006 02:53:54.083296 2399191 ops.go:34] apiserver oom_adj: -16
	I1006 02:53:54.083513 2399191 kubeadm.go:640] restartCluster took 1m6.931273862s
	I1006 02:53:54.083541 2399191 kubeadm.go:406] StartCluster complete in 1m7.04546831s
	I1006 02:53:54.083592 2399191 settings.go:142] acquiring lock: {Name:mkbf8759b61c125112e0d07f4c53bb4e84a6de4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:53:54.083808 2399191 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17314-2262959/kubeconfig
	I1006 02:53:54.085262 2399191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/kubeconfig: {Name:mkf2ae4867a3638ac11dac5beae6919a4f83b43a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:53:54.085709 2399191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1006 02:53:54.086445 2399191 config.go:182] Loaded profile config "pause-647181": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1006 02:53:54.086587 2399191 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1006 02:53:54.089245 2399191 out.go:177] * Enabled addons: 
	I1006 02:53:54.088072 2399191 kapi.go:59] client config for pause-647181: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/pause-647181/client.crt", KeyFile:"/home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/pause-647181/client.key", CAFile:"/home/jenkins/minikube-integration/17314-2262959/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16a24a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 02:53:54.092384 2399191 addons.go:502] enable addons completed in 5.7975ms: enabled=[]
	I1006 02:53:54.096110 2399191 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-647181" context rescaled to 1 replicas
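
The kapi.go:248 line rescales the coredns deployment to a single replica. A sketch of that operation through client-go's scale subresource; the kubeconfig path is an assumption:

	// rescale_coredns.go — sketch of scaling a deployment via the scale subresource.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		dep := cs.AppsV1().Deployments("kube-system")
		scale, err := dep.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		scale.Spec.Replicas = 1
		if _, err := dep.UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("coredns rescaled to 1 replica")
	}
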
	I1006 02:53:54.096165 2399191 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 02:53:54.098257 2399191 out.go:177] * Verifying Kubernetes components...
	I1006 02:53:54.100537 2399191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 02:53:54.263527 2399191 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1006 02:53:54.263600 2399191 node_ready.go:35] waiting up to 6m0s for node "pause-647181" to be "Ready" ...
	I1006 02:53:54.268610 2399191 node_ready.go:49] node "pause-647181" has status "Ready":"True"
	I1006 02:53:54.268689 2399191 node_ready.go:38] duration metric: took 5.07505ms waiting for node "pause-647181" to be "Ready" ...
	I1006 02:53:54.268715 2399191 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1006 02:53:54.475622 2399191 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qvjlc" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:54.865846 2399191 pod_ready.go:92] pod "coredns-5dd5756b68-qvjlc" in "kube-system" namespace has status "Ready":"True"
	I1006 02:53:54.865922 2399191 pod_ready.go:81] duration metric: took 390.233621ms waiting for pod "coredns-5dd5756b68-qvjlc" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:54.865958 2399191 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:55.264593 2399191 pod_ready.go:92] pod "etcd-pause-647181" in "kube-system" namespace has status "Ready":"True"
	I1006 02:53:55.264632 2399191 pod_ready.go:81] duration metric: took 398.648295ms waiting for pod "etcd-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:55.264662 2399191 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:50.755270 2405077 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1006 02:53:50.759715 2405077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 02:53:50.772847 2405077 certs.go:56] Setting up /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413 for IP: 192.168.76.2
	I1006 02:53:50.772867 2405077 certs.go:190] acquiring lock for shared ca certs: {Name:mk4532656c287f04abff160cc5263fddcb69ac4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:53:50.772996 2405077 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.key
	I1006 02:53:50.773037 2405077 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17314-2262959/.minikube/proxy-client-ca.key
	I1006 02:53:50.773084 2405077 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/client.key
	I1006 02:53:50.773093 2405077 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/client.crt with IP's: []
	I1006 02:53:50.991971 2405077 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/client.crt ...
	I1006 02:53:50.991992 2405077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/client.crt: {Name:mk611f3886fd953cd3cf4b41020772de97a746bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:53:50.993488 2405077 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/client.key ...
	I1006 02:53:50.993514 2405077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/client.key: {Name:mk6bcb61dbd5d64963346b3ee83acb593d4e2699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:53:50.993659 2405077 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/apiserver.key.31bdca25
	I1006 02:53:50.993674 2405077 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1006 02:53:51.304114 2405077 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/apiserver.crt.31bdca25 ...
	I1006 02:53:51.304129 2405077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/apiserver.crt.31bdca25: {Name:mk1bdf1d5890ea05ae9a237ae709bf6659b2149d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:53:51.304323 2405077 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/apiserver.key.31bdca25 ...
	I1006 02:53:51.304331 2405077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/apiserver.key.31bdca25: {Name:mk5ed8e145784bed8fef156cd6a5c89a1b49de8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:53:51.304431 2405077 certs.go:337] copying /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/apiserver.crt
	I1006 02:53:51.304507 2405077 certs.go:341] copying /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/apiserver.key
	I1006 02:53:51.304558 2405077 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/proxy-client.key
	I1006 02:53:51.304570 2405077 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/proxy-client.crt with IP's: []
	I1006 02:53:51.647885 2405077 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/proxy-client.crt ...
	I1006 02:53:51.647898 2405077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/proxy-client.crt: {Name:mk9696d0a1eb19a5be8f41d3614a368900e194d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:53:51.648089 2405077 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/proxy-client.key ...
	I1006 02:53:51.648096 2405077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/proxy-client.key: {Name:mkd834bc83e0c5ff27f9947981c312bd4dc0e865 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:53:51.648903 2405077 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/2268306.pem (1338 bytes)
	W1006 02:53:51.648941 2405077 certs.go:433] ignoring /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/2268306_empty.pem, impossibly tiny 0 bytes
	I1006 02:53:51.648952 2405077 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 02:53:51.648976 2405077 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem (1082 bytes)
	I1006 02:53:51.648998 2405077 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem (1123 bytes)
	I1006 02:53:51.649020 2405077 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/key.pem (1675 bytes)
	I1006 02:53:51.649067 2405077 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/22683062.pem (1708 bytes)
	I1006 02:53:51.649674 2405077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1006 02:53:51.679192 2405077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1006 02:53:51.709535 2405077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 02:53:51.738049 2405077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 02:53:51.767065 2405077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 02:53:51.796576 2405077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 02:53:51.826160 2405077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 02:53:51.856619 2405077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1006 02:53:51.885574 2405077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/2268306.pem --> /usr/share/ca-certificates/2268306.pem (1338 bytes)
	I1006 02:53:51.915919 2405077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/22683062.pem --> /usr/share/ca-certificates/22683062.pem (1708 bytes)
	I1006 02:53:51.945934 2405077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 02:53:51.976110 2405077 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 02:53:51.999826 2405077 ssh_runner.go:195] Run: openssl version
	I1006 02:53:52.007590 2405077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 02:53:52.020067 2405077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 02:53:52.025430 2405077 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  6 02:12 /usr/share/ca-certificates/minikubeCA.pem
	I1006 02:53:52.025495 2405077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 02:53:52.037439 2405077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 02:53:52.056447 2405077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2268306.pem && ln -fs /usr/share/ca-certificates/2268306.pem /etc/ssl/certs/2268306.pem"
	I1006 02:53:52.077891 2405077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2268306.pem
	I1006 02:53:52.086035 2405077 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  6 02:19 /usr/share/ca-certificates/2268306.pem
	I1006 02:53:52.086118 2405077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2268306.pem
	I1006 02:53:52.097252 2405077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2268306.pem /etc/ssl/certs/51391683.0"
	I1006 02:53:52.109376 2405077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22683062.pem && ln -fs /usr/share/ca-certificates/22683062.pem /etc/ssl/certs/22683062.pem"
	I1006 02:53:52.121918 2405077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22683062.pem
	I1006 02:53:52.126889 2405077 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  6 02:19 /usr/share/ca-certificates/22683062.pem
	I1006 02:53:52.126945 2405077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22683062.pem
	I1006 02:53:52.136128 2405077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22683062.pem /etc/ssl/certs/3ec20f2e.0"
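
The openssl x509 -hash calls above compute the subject hash used to name the /etc/ssl/certs symlinks (e.g. b5213941.0, 3ec20f2e.0). Go's standard library does not expose that hash, so this sketch only verifies what the step depends on: that the PEM parses and its validity window is readable (path from the log):

	// cert_info.go — sketch of parsing a trusted-CA PEM and printing its validity.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("/usr/share/ca-certificates/minikubeCA.pem")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Printf("subject=%s notAfter=%s\n", cert.Subject, cert.NotAfter)
	}
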
	I1006 02:53:52.149380 2405077 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1006 02:53:52.154145 2405077 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1006 02:53:52.154199 2405077 kubeadm.go:404] StartCluster: {Name:cert-expiration-885413 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:cert-expiration-885413 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1006 02:53:52.154281 2405077 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 02:53:52.154340 2405077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 02:53:52.197009 2405077 cri.go:89] found id: ""
	I1006 02:53:52.197071 2405077 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 02:53:52.208186 2405077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 02:53:52.219206 2405077 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1006 02:53:52.219268 2405077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 02:53:52.230044 2405077 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 02:53:52.230079 2405077 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 02:53:52.345311 2405077 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-aws\n", err: exit status 1
	I1006 02:53:52.430131 2405077 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
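	
	Both kubeadm warnings are expected under the docker driver: the node is itself a container, so the host's kernel config cannot be probed (hence the modprobe failure behind SystemVerification, which minikube explicitly skips above), and minikube starts the kubelet itself rather than enabling the systemd unit.
	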
	I1006 02:53:55.665133 2399191 pod_ready.go:92] pod "kube-apiserver-pause-647181" in "kube-system" namespace has status "Ready":"True"
	I1006 02:53:55.665167 2399191 pod_ready.go:81] duration metric: took 400.490903ms waiting for pod "kube-apiserver-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:55.665180 2399191 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:56.064620 2399191 pod_ready.go:92] pod "kube-controller-manager-pause-647181" in "kube-system" namespace has status "Ready":"True"
	I1006 02:53:56.064693 2399191 pod_ready.go:81] duration metric: took 399.50313ms waiting for pod "kube-controller-manager-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:56.064710 2399191 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9vvq2" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:56.468816 2399191 pod_ready.go:92] pod "kube-proxy-9vvq2" in "kube-system" namespace has status "Ready":"True"
	I1006 02:53:56.468843 2399191 pod_ready.go:81] duration metric: took 404.124547ms waiting for pod "kube-proxy-9vvq2" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:56.468866 2399191 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:56.865980 2399191 pod_ready.go:92] pod "kube-scheduler-pause-647181" in "kube-system" namespace has status "Ready":"True"
	I1006 02:53:56.866009 2399191 pod_ready.go:81] duration metric: took 397.135665ms waiting for pod "kube-scheduler-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:56.866019 2399191 pod_ready.go:38] duration metric: took 2.597262607s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
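	
	Each pod_ready wait above polls the pod's Ready condition until it reports True or the 6m0s budget runs out. A rough client-go equivalent (a sketch, not minikube's pod_ready.go; the kubeconfig path, pod name, and namespace are assumptions taken from the log):
	
	package main
	
	import (
		"context"
		"fmt"
		"log"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// ready reports whether the pod's Ready condition is True.
	func ready(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		deadline := time.Now().Add(6 * time.Minute) // same budget as the log
		for time.Now().Before(deadline) {
			p, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-pause-647181", metav1.GetOptions{})
			if err == nil && ready(p) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(400 * time.Millisecond)
		}
		log.Fatal("timed out waiting for pod to be Ready")
	}
	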
	I1006 02:53:56.866034 2399191 api_server.go:52] waiting for apiserver process to appear ...
	I1006 02:53:56.866097 2399191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 02:53:56.881063 2399191 api_server.go:72] duration metric: took 2.784851851s to wait for apiserver process to appear ...
	I1006 02:53:56.881088 2399191 api_server.go:88] waiting for apiserver healthz status ...
	I1006 02:53:56.881105 2399191 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1006 02:53:56.890944 2399191 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1006 02:53:56.892803 2399191 api_server.go:141] control plane version: v1.28.2
	I1006 02:53:56.892835 2399191 api_server.go:131] duration metric: took 11.740115ms to wait for apiserver health ...
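	
	The healthz gate above is just an HTTPS GET against the apiserver that must come back 200 with body "ok". A bare-bones poller under the same assumptions (address from the log; certificate verification skipped only because the cluster's serving cert is not in the host trust store):
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		for {
			resp, err := client.Get("https://192.168.67.2:8443/healthz")
			if err == nil {
				healthy := resp.StatusCode == http.StatusOK
				resp.Body.Close()
				if healthy {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
	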
	I1006 02:53:56.892845 2399191 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 02:53:57.069255 2399191 system_pods.go:59] 7 kube-system pods found
	I1006 02:53:57.069337 2399191 system_pods.go:61] "coredns-5dd5756b68-qvjlc" [5582861e-0771-4be9-85fa-3194c946e4bc] Running
	I1006 02:53:57.069359 2399191 system_pods.go:61] "etcd-pause-647181" [2278dbab-5509-4155-ab45-4bbc4a195cea] Running
	I1006 02:53:57.069382 2399191 system_pods.go:61] "kindnet-5zz7b" [006acf6b-70f8-4596-8965-0f13beb4fff6] Running
	I1006 02:53:57.069420 2399191 system_pods.go:61] "kube-apiserver-pause-647181" [d0788f9d-f101-42ba-9145-4435f137fa8b] Running
	I1006 02:53:57.069452 2399191 system_pods.go:61] "kube-controller-manager-pause-647181" [255471e6-b5e4-4d3e-b091-02979c39ab2a] Running
	I1006 02:53:57.069473 2399191 system_pods.go:61] "kube-proxy-9vvq2" [eec193c3-c96f-43e8-a0c3-e0964c0c7b51] Running
	I1006 02:53:57.069511 2399191 system_pods.go:61] "kube-scheduler-pause-647181" [cb621118-2a4c-4ed4-88ad-f64283986b73] Running
	I1006 02:53:57.069539 2399191 system_pods.go:74] duration metric: took 176.68489ms to wait for pod list to return data ...
	I1006 02:53:57.069564 2399191 default_sa.go:34] waiting for default service account to be created ...
	I1006 02:53:57.263979 2399191 default_sa.go:45] found service account: "default"
	I1006 02:53:57.264049 2399191 default_sa.go:55] duration metric: took 194.449112ms for default service account to be created ...
	I1006 02:53:57.264074 2399191 system_pods.go:116] waiting for k8s-apps to be running ...
	I1006 02:53:57.469340 2399191 system_pods.go:86] 7 kube-system pods found
	I1006 02:53:57.469414 2399191 system_pods.go:89] "coredns-5dd5756b68-qvjlc" [5582861e-0771-4be9-85fa-3194c946e4bc] Running
	I1006 02:53:57.469453 2399191 system_pods.go:89] "etcd-pause-647181" [2278dbab-5509-4155-ab45-4bbc4a195cea] Running
	I1006 02:53:57.469482 2399191 system_pods.go:89] "kindnet-5zz7b" [006acf6b-70f8-4596-8965-0f13beb4fff6] Running
	I1006 02:53:57.469503 2399191 system_pods.go:89] "kube-apiserver-pause-647181" [d0788f9d-f101-42ba-9145-4435f137fa8b] Running
	I1006 02:53:57.469535 2399191 system_pods.go:89] "kube-controller-manager-pause-647181" [255471e6-b5e4-4d3e-b091-02979c39ab2a] Running
	I1006 02:53:57.469561 2399191 system_pods.go:89] "kube-proxy-9vvq2" [eec193c3-c96f-43e8-a0c3-e0964c0c7b51] Running
	I1006 02:53:57.469580 2399191 system_pods.go:89] "kube-scheduler-pause-647181" [cb621118-2a4c-4ed4-88ad-f64283986b73] Running
	I1006 02:53:57.469615 2399191 system_pods.go:126] duration metric: took 205.522424ms to wait for k8s-apps to be running ...
	I1006 02:53:57.469648 2399191 system_svc.go:44] waiting for kubelet service to be running ....
	I1006 02:53:57.469734 2399191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 02:53:57.490000 2399191 system_svc.go:56] duration metric: took 20.348879ms WaitForService to wait for kubelet.
	I1006 02:53:57.490074 2399191 kubeadm.go:581] duration metric: took 3.393879604s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1006 02:53:57.490121 2399191 node_conditions.go:102] verifying NodePressure condition ...
	I1006 02:53:57.665418 2399191 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 02:53:57.665492 2399191 node_conditions.go:123] node cpu capacity is 2
	I1006 02:53:57.665516 2399191 node_conditions.go:105] duration metric: took 175.362394ms to run NodePressure ...
	I1006 02:53:57.665541 2399191 start.go:228] waiting for startup goroutines ...
	I1006 02:53:57.665575 2399191 start.go:233] waiting for cluster config update ...
	I1006 02:53:57.665601 2399191 start.go:242] writing updated cluster config ...
	I1006 02:53:57.665988 2399191 ssh_runner.go:195] Run: rm -f paused
	I1006 02:53:57.749083 2399191 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1006 02:53:57.752833 2399191 out.go:177] * Done! kubectl is now configured to use "pause-647181" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Oct 06 02:53:42 pause-647181 crio[2592]: time="2023-10-06 02:53:42.193233511Z" level=info msg="Creating container: kube-system/coredns-5dd5756b68-qvjlc/coredns" id=cb852b53-b0df-4325-a85a-5eaf8054fd98 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 02:53:42 pause-647181 crio[2592]: time="2023-10-06 02:53:42.193336593Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 06 02:53:42 pause-647181 crio[2592]: time="2023-10-06 02:53:42.240434039Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/0371481bdf1e400611c971aca35803110cdca48f03942d685f0477c280f68930/merged/etc/passwd: no such file or directory"
	Oct 06 02:53:42 pause-647181 crio[2592]: time="2023-10-06 02:53:42.240652453Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0371481bdf1e400611c971aca35803110cdca48f03942d685f0477c280f68930/merged/etc/group: no such file or directory"
	Oct 06 02:53:42 pause-647181 crio[2592]: time="2023-10-06 02:53:42.449315186Z" level=info msg="Created container e6461d2a0e977c3051c8138bd4506ebdcbf46d670d3499760ac6180283967197: kube-system/coredns-5dd5756b68-qvjlc/coredns" id=cb852b53-b0df-4325-a85a-5eaf8054fd98 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 02:53:42 pause-647181 crio[2592]: time="2023-10-06 02:53:42.450582427Z" level=info msg="Starting container: e6461d2a0e977c3051c8138bd4506ebdcbf46d670d3499760ac6180283967197" id=0b837981-6fff-40f0-a2f9-421d76d0313c name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 02:53:42 pause-647181 crio[2592]: time="2023-10-06 02:53:42.470893276Z" level=info msg="Started container" PID=3992 containerID=e6461d2a0e977c3051c8138bd4506ebdcbf46d670d3499760ac6180283967197 description=kube-system/coredns-5dd5756b68-qvjlc/coredns id=0b837981-6fff-40f0-a2f9-421d76d0313c name=/runtime.v1.RuntimeService/StartContainer sandboxID=b50c32f244e573157908df70194d395473663708f3c47b6310cbdbf9bee5ff91
	Oct 06 02:53:42 pause-647181 crio[2592]: time="2023-10-06 02:53:42.514936015Z" level=info msg="Created container a75615795b0c938988076ada65f9dde5c43bd58a436d4cdfb9e73713f8b80405: kube-system/kindnet-5zz7b/kindnet-cni" id=4a61d2f7-20ee-41a8-be1c-d2ed92e255df name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 02:53:42 pause-647181 crio[2592]: time="2023-10-06 02:53:42.516084051Z" level=info msg="Starting container: a75615795b0c938988076ada65f9dde5c43bd58a436d4cdfb9e73713f8b80405" id=fa8beab8-65cb-4469-9f0a-60c36247ac03 name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 02:53:42 pause-647181 crio[2592]: time="2023-10-06 02:53:42.540162316Z" level=info msg="Started container" PID=3979 containerID=a75615795b0c938988076ada65f9dde5c43bd58a436d4cdfb9e73713f8b80405 description=kube-system/kindnet-5zz7b/kindnet-cni id=fa8beab8-65cb-4469-9f0a-60c36247ac03 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6a34392daaf3058623479fc0d80145d7832dfe21eb0d18f7962afab42660bf22
	Oct 06 02:53:42 pause-647181 crio[2592]: time="2023-10-06 02:53:42.627262395Z" level=info msg="Created container 0651a743420f5675b7b2f78756250ac69a5295917655f1ef7455ce8e1c136edd: kube-system/kube-proxy-9vvq2/kube-proxy" id=8f5a4e70-e765-4b80-8eed-17e404e694f7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 02:53:42 pause-647181 crio[2592]: time="2023-10-06 02:53:42.627988323Z" level=info msg="Starting container: 0651a743420f5675b7b2f78756250ac69a5295917655f1ef7455ce8e1c136edd" id=2450f670-33c8-4f52-b165-64a04e8cff4e name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 02:53:42 pause-647181 crio[2592]: time="2023-10-06 02:53:42.672099560Z" level=info msg="Started container" PID=3981 containerID=0651a743420f5675b7b2f78756250ac69a5295917655f1ef7455ce8e1c136edd description=kube-system/kube-proxy-9vvq2/kube-proxy id=2450f670-33c8-4f52-b165-64a04e8cff4e name=/runtime.v1.RuntimeService/StartContainer sandboxID=1e78cd75160e76cccdaa932000b542af1336bcbfb8f705e538c5b0ae4b55d0e1
	Oct 06 02:53:42 pause-647181 crio[2592]: time="2023-10-06 02:53:42.973082483Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Oct 06 02:53:43 pause-647181 crio[2592]: time="2023-10-06 02:53:43.041226584Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 02:53:43 pause-647181 crio[2592]: time="2023-10-06 02:53:43.041262392Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 02:53:43 pause-647181 crio[2592]: time="2023-10-06 02:53:43.041278482Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Oct 06 02:53:43 pause-647181 crio[2592]: time="2023-10-06 02:53:43.079382262Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 02:53:43 pause-647181 crio[2592]: time="2023-10-06 02:53:43.079420039Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 02:53:43 pause-647181 crio[2592]: time="2023-10-06 02:53:43.079437541Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Oct 06 02:53:43 pause-647181 crio[2592]: time="2023-10-06 02:53:43.093591262Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 02:53:43 pause-647181 crio[2592]: time="2023-10-06 02:53:43.093624387Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 02:53:43 pause-647181 crio[2592]: time="2023-10-06 02:53:43.093640256Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Oct 06 02:53:43 pause-647181 crio[2592]: time="2023-10-06 02:53:43.133020179Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 02:53:43 pause-647181 crio[2592]: time="2023-10-06 02:53:43.133052048Z" level=info msg="Updated default CNI network name to kindnet"
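	
	The CNI monitoring events above are CRI-O noticing kindnet write its conflist into /etc/cni/net.d (a .temp file first, then a rename into place) and re-resolving the default network after each change. The watch itself is plain inotify; a stand-alone sketch with fsnotify (an illustration of the mechanism, not CRI-O's actual ocicni-based code):
	
	package main
	
	import (
		"log"
	
		"github.com/fsnotify/fsnotify"
	)
	
	func main() {
		w, err := fsnotify.NewWatcher()
		if err != nil {
			log.Fatal(err)
		}
		defer w.Close()
		if err := w.Add("/etc/cni/net.d"); err != nil {
			log.Fatal(err)
		}
		for {
			select {
			case ev := <-w.Events:
				// CRI-O reacts to CREATE/WRITE/RENAME here by reloading the
				// default CNI network, as the log above shows.
				log.Printf("CNI monitoring event %q: %s", ev.Name, ev.Op)
			case err := <-w.Errors:
				log.Println("watch error:", err)
			}
		}
	}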
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e6461d2a0e977       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   17 seconds ago       Running             coredns                   3                   b50c32f244e57       coredns-5dd5756b68-qvjlc
	0651a743420f5       7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa   17 seconds ago       Running             kube-proxy                3                   1e78cd75160e7       kube-proxy-9vvq2
	a75615795b0c9       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   17 seconds ago       Running             kindnet-cni               3                   6a34392daaf30       kindnet-5zz7b
	49ce28bf9b8d6       89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c   23 seconds ago       Running             kube-controller-manager   3                   92e0672bb5ea7       kube-controller-manager-pause-647181
	5d6de69c7f93e       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   23 seconds ago       Running             etcd                      3                   b6c9c6e9b443f       etcd-pause-647181
	65455e488da1a       30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c   23 seconds ago       Running             kube-apiserver            3                   2d20e802177f3       kube-apiserver-pause-647181
	d5d42bb4b86d0       64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7   24 seconds ago       Running             kube-scheduler            3                   32a3093f8b26d       kube-scheduler-pause-647181
	0c7033aa51936       64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7   51 seconds ago       Exited              kube-scheduler            2                   32a3093f8b26d       kube-scheduler-pause-647181
	1f627028913f8       89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c   51 seconds ago       Exited              kube-controller-manager   2                   92e0672bb5ea7       kube-controller-manager-pause-647181
	5481ab047c1f3       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   51 seconds ago       Exited              coredns                   2                   b50c32f244e57       coredns-5dd5756b68-qvjlc
	614ae4c2dae8d       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   52 seconds ago       Exited              etcd                      2                   b6c9c6e9b443f       etcd-pause-647181
	1f7d2ed33dd36       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   About a minute ago   Exited              kindnet-cni               2                   6a34392daaf30       kindnet-5zz7b
	0cf5928c77815       7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa   About a minute ago   Exited              kube-proxy                2                   1e78cd75160e7       kube-proxy-9vvq2
	4e835c4e54a76       30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c   About a minute ago   Exited              kube-apiserver            2                   2d20e802177f3       kube-apiserver-pause-647181
	
	* 
	* ==> coredns [5481ab047c1f37fc36bbac6bffa2691ed4776094a359b4a13151405f76d97cb7] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41771 - 40892 "HINFO IN 7976085451383260252.3804424261778561535. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026451813s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [e6461d2a0e977c3051c8138bd4506ebdcbf46d670d3499760ac6180283967197] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34437 - 15200 "HINFO IN 221119102785819024.72943900817473966. udp 54 false 512" NXDOMAIN qr,rd,ra 54 0.01347316s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-647181
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-647181
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=84890cb24d0240d9d992d7c7712ee519ceed4154
	                    minikube.k8s.io/name=pause-647181
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_06T02_51_48_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Oct 2023 02:51:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-647181
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Oct 2023 02:53:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Oct 2023 02:53:42 +0000   Fri, 06 Oct 2023 02:51:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Oct 2023 02:53:42 +0000   Fri, 06 Oct 2023 02:51:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Oct 2023 02:53:42 +0000   Fri, 06 Oct 2023 02:51:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Oct 2023 02:53:42 +0000   Fri, 06 Oct 2023 02:52:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    pause-647181
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 c4dd99cc55f149138895638849151b21
	  System UUID:                688f5c64-ef7e-47ff-87cf-6335f6e0d45a
	  Boot ID:                    d6810820-8fb1-4098-8489-41f3441712b9
	  Kernel Version:             5.15.0-1047-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-qvjlc                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m
	  kube-system                 etcd-pause-647181                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m12s
	  kube-system                 kindnet-5zz7b                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m
	  kube-system                 kube-apiserver-pause-647181             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-controller-manager-pause-647181    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-proxy-9vvq2                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-scheduler-pause-647181             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 117s                   kube-proxy       
	  Normal  Starting                 16s                    kube-proxy       
	  Normal  Starting                 43s                    kube-proxy       
	  Normal  Starting                 2m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    2m22s (x8 over 2m23s)  kubelet          Node pause-647181 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m22s (x8 over 2m23s)  kubelet          Node pause-647181 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m22s (x8 over 2m23s)  kubelet          Node pause-647181 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m12s                  kubelet          Node pause-647181 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m12s                  kubelet          Node pause-647181 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m12s                  kubelet          Node pause-647181 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m1s                   node-controller  Node pause-647181 event: Registered Node pause-647181 in Controller
	  Normal  NodeReady                88s                    kubelet          Node pause-647181 status is now: NodeReady
	  Normal  Starting                 25s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 25s)      kubelet          Node pause-647181 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 25s)      kubelet          Node pause-647181 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x8 over 25s)      kubelet          Node pause-647181 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5s                     node-controller  Node pause-647181 event: Registered Node pause-647181 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.001054] FS-Cache: O-key=[8] '276a3b0000000000'
	[  +0.000712] FS-Cache: N-cookie c=0000009c [p=00000093 fl=2 nc=0 na=1]
	[  +0.000923] FS-Cache: N-cookie d=00000000b75c0848{9p.inode} n=00000000ea8162ba
	[  +0.001002] FS-Cache: N-key=[8] '276a3b0000000000'
	[  +0.002663] FS-Cache: Duplicate cookie detected
	[  +0.000700] FS-Cache: O-cookie c=00000096 [p=00000093 fl=226 nc=0 na=1]
	[  +0.000920] FS-Cache: O-cookie d=00000000b75c0848{9p.inode} n=00000000fc3db6f4
	[  +0.000983] FS-Cache: O-key=[8] '276a3b0000000000'
	[  +0.000674] FS-Cache: N-cookie c=0000009d [p=00000093 fl=2 nc=0 na=1]
	[  +0.000946] FS-Cache: N-cookie d=00000000b75c0848{9p.inode} n=00000000b9bd865e
	[  +0.000999] FS-Cache: N-key=[8] '276a3b0000000000'
	[  +2.732427] FS-Cache: Duplicate cookie detected
	[  +0.000701] FS-Cache: O-cookie c=00000094 [p=00000093 fl=226 nc=0 na=1]
	[  +0.000949] FS-Cache: O-cookie d=00000000b75c0848{9p.inode} n=00000000d93f34d8
	[  +0.000995] FS-Cache: O-key=[8] '266a3b0000000000'
	[  +0.000657] FS-Cache: N-cookie c=0000009f [p=00000093 fl=2 nc=0 na=1]
	[  +0.000872] FS-Cache: N-cookie d=00000000b75c0848{9p.inode} n=000000000c4a9176
	[  +0.000974] FS-Cache: N-key=[8] '266a3b0000000000'
	[  +0.306196] FS-Cache: Duplicate cookie detected
	[  +0.000700] FS-Cache: O-cookie c=00000099 [p=00000093 fl=226 nc=0 na=1]
	[  +0.000922] FS-Cache: O-cookie d=00000000b75c0848{9p.inode} n=0000000019ad38e0
	[  +0.001027] FS-Cache: O-key=[8] '2e6a3b0000000000'
	[  +0.000669] FS-Cache: N-cookie c=000000a0 [p=00000093 fl=2 nc=0 na=1]
	[  +0.000879] FS-Cache: N-cookie d=00000000b75c0848{9p.inode} n=00000000ea8162ba
	[  +0.000981] FS-Cache: N-key=[8] '2e6a3b0000000000'
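	
	The FS-Cache "Duplicate cookie detected" spam above comes from the kernel's caching layer over 9p filesystems; on a shared CI host the docker-driver nodes all see the same kernel log, and these duplicate-cookie warnings are generally benign noise rather than a signal for the test failures.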
	
	* 
	* ==> etcd [5d6de69c7f93e007181208271bac8495ce391ad74118e1200bb41d1979af4135] <==
	* {"level":"info","ts":"2023-10-06T02:53:36.418493Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-06T02:53:36.423418Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-06T02:53:36.423225Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-10-06T02:53:36.424046Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-10-06T02:53:36.423967Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-06T02:53:38.043117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 3"}
	{"level":"info","ts":"2023-10-06T02:53:38.043227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-10-06T02:53:38.043269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-10-06T02:53:38.04333Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 4"}
	{"level":"info","ts":"2023-10-06T02:53:38.043371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2023-10-06T02:53:38.043412Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 4"}
	{"level":"info","ts":"2023-10-06T02:53:38.043444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2023-10-06T02:53:38.047328Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-647181 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-06T02:53:38.047852Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-06T02:53:38.048915Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-06T02:53:38.055226Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-06T02:53:38.055253Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-06T02:53:38.066883Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-06T02:53:38.095969Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-10-06T02:53:41.974846Z","caller":"traceutil/trace.go:171","msg":"trace[1316894804] transaction","detail":"{read_only:false; number_of_response:0; response_revision:471; }","duration":"115.183197ms","start":"2023-10-06T02:53:41.859647Z","end":"2023-10-06T02:53:41.97483Z","steps":["trace[1316894804] 'process raft request'  (duration: 41.61226ms)","trace[1316894804] 'compare'  (duration: 73.497353ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-06T02:53:41.975284Z","caller":"traceutil/trace.go:171","msg":"trace[848629592] linearizableReadLoop","detail":"{readStateIndex:496; appliedIndex:495; }","duration":"115.534723ms","start":"2023-10-06T02:53:41.859739Z","end":"2023-10-06T02:53:41.975274Z","steps":["trace[848629592] 'read index received'  (duration: 41.59763ms)","trace[848629592] 'applied index is now lower than readState.Index'  (duration: 73.936018ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-06T02:53:41.975399Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.659318ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/pause-647181\" ","response":"range_response_count:1 size:676"}
	{"level":"info","ts":"2023-10-06T02:53:41.975456Z","caller":"traceutil/trace.go:171","msg":"trace[2072830908] range","detail":"{range_begin:/registry/csinodes/pause-647181; range_end:; response_count:1; response_revision:471; }","duration":"115.730129ms","start":"2023-10-06T02:53:41.859719Z","end":"2023-10-06T02:53:41.975449Z","steps":["trace[2072830908] 'agreement among raft nodes before linearized reading'  (duration: 115.633414ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-06T02:53:41.978271Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.056229ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-06T02:53:41.978335Z","caller":"traceutil/trace.go:171","msg":"trace[962203764] range","detail":"{range_begin:/registry/masterleases/; range_end:/registry/masterleases0; response_count:0; response_revision:472; }","duration":"115.121568ms","start":"2023-10-06T02:53:41.863196Z","end":"2023-10-06T02:53:41.978318Z","steps":["trace[962203764] 'agreement among raft nodes before linearized reading'  (duration: 115.035183ms)"],"step_count":1}
	
	* 
	* ==> etcd [614ae4c2dae8d574f26260b3a7eafa44d81179dfec617cf466b75daffacc2b9a] <==
	* {"level":"info","ts":"2023-10-06T02:53:07.708387Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-06T02:53:08.691983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2023-10-06T02:53:08.692706Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-10-06T02:53:08.693263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-10-06T02:53:08.701311Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2023-10-06T02:53:08.701459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-10-06T02:53:08.701565Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2023-10-06T02:53:08.701655Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-10-06T02:53:08.718032Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-647181 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-06T02:53:08.718077Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-06T02:53:08.735438Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-06T02:53:08.736374Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-06T02:53:08.745231Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-10-06T02:53:08.759099Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-06T02:53:08.759133Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-06T02:53:21.753573Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-10-06T02:53:21.753633Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-647181","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	{"level":"warn","ts":"2023-10-06T02:53:21.753708Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-06T02:53:21.753778Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-06T02:53:21.804737Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.67.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-06T02:53:21.804881Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.67.2:2379: use of closed network connection"}
	{"level":"info","ts":"2023-10-06T02:53:21.804954Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2023-10-06T02:53:21.807626Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-10-06T02:53:21.807868Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-10-06T02:53:21.80791Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-647181","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	* 
	* ==> kernel <==
	*  02:54:00 up 12:36,  0 users,  load average: 4.90, 4.31, 2.94
	Linux pause-647181 5.15.0-1047-aws #52~20.04.1-Ubuntu SMP Thu Sep 21 10:08:54 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [1f7d2ed33dd3634ef3dd4898e89ea5b1a30b16dde68873ab0ec0afdcba6d3efd] <==
	* I1006 02:52:59.725430       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1006 02:52:59.725490       1 main.go:107] hostIP = 192.168.67.2
	podIP = 192.168.67.2
	I1006 02:52:59.725631       1 main.go:116] setting mtu 1500 for CNI 
	I1006 02:52:59.725641       1 main.go:146] kindnetd IP family: "ipv4"
	I1006 02:52:59.725652       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1006 02:53:10.035694       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I1006 02:53:14.807888       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1006 02:53:14.811104       1 main.go:227] handling current node
	
	* 
	* ==> kindnet [a75615795b0c938988076ada65f9dde5c43bd58a436d4cdfb9e73713f8b80405] <==
	* I1006 02:53:42.626310       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1006 02:53:42.626372       1 main.go:107] hostIP = 192.168.67.2
	podIP = 192.168.67.2
	I1006 02:53:42.626506       1 main.go:116] setting mtu 1500 for CNI 
	I1006 02:53:42.626519       1 main.go:146] kindnetd IP family: "ipv4"
	I1006 02:53:42.626530       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1006 02:53:42.972868       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1006 02:53:42.972916       1 main.go:227] handling current node
	I1006 02:53:53.040195       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1006 02:53:53.040421       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [4e835c4e54a761cd84f42ccbf369d2132f54b37768f4620253facaa380cca10e] <==
	* }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 02:53:31.947201       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 02:53:31.991893       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 02:53:32.123697       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
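	
	These refused dials are the exited apiserver [4e835c4e54a76] still retrying its etcd client connections after etcd [614ae4c2dae8d] closed its 2379 listeners during the restart (see the "stopped secure grpc server" lines in that etcd's log above); the replacement apiserver [65455e488da1a] then comes up cleanly.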
	
	* 
	* ==> kube-apiserver [65455e488da1a2ee47fc227e175d5f533b87130a3128f5bccecf01d6fce2062b] <==
	* I1006 02:53:41.593386       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1006 02:53:41.593418       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1006 02:53:41.593504       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1006 02:53:41.594673       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1006 02:53:41.594696       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1006 02:53:41.782079       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1006 02:53:41.782165       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1006 02:53:41.798950       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1006 02:53:41.809398       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1006 02:53:41.818064       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1006 02:53:41.818119       1 shared_informer.go:318] Caches are synced for configmaps
	I1006 02:53:41.818155       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1006 02:53:41.823551       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1006 02:53:41.823972       1 aggregator.go:166] initial CRD sync complete...
	I1006 02:53:41.823992       1 autoregister_controller.go:141] Starting autoregister controller
	I1006 02:53:41.823998       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1006 02:53:41.824005       1 cache.go:39] Caches are synced for autoregister controller
	I1006 02:53:41.859209       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1006 02:53:41.984377       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1006 02:53:42.648994       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1006 02:53:45.249216       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1006 02:53:45.403521       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1006 02:53:45.414858       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1006 02:53:45.478648       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1006 02:53:45.486365       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-controller-manager [1f627028913f83193d21e2ba9b23de5addd0af3f499f32d0a5491aa797f9b938] <==
	* I1006 02:53:10.540165       1 serving.go:348] Generated self-signed cert in-memory
	I1006 02:53:15.459670       1 controllermanager.go:189] "Starting" version="v1.28.2"
	I1006 02:53:15.459708       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 02:53:15.463947       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I1006 02:53:15.465102       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1006 02:53:15.465215       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1006 02:53:15.503352       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	* 
	* ==> kube-controller-manager [49ce28bf9b8d6d2e09af0cdf6857f5f6a87dd44d509b54fa9c94848af062a41f] <==
	* I1006 02:53:54.653893       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1006 02:53:54.653942       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1006 02:53:54.653971       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1006 02:53:54.654114       1 shared_informer.go:318] Caches are synced for stateful set
	I1006 02:53:54.657742       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I1006 02:53:54.657877       1 shared_informer.go:318] Caches are synced for PV protection
	I1006 02:53:54.659401       1 shared_informer.go:318] Caches are synced for persistent volume
	I1006 02:53:54.666824       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I1006 02:53:54.666975       1 shared_informer.go:318] Caches are synced for crt configmap
	I1006 02:53:54.670690       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1006 02:53:54.670835       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I1006 02:53:54.670958       1 shared_informer.go:318] Caches are synced for namespace
	I1006 02:53:54.679166       1 shared_informer.go:318] Caches are synced for ephemeral
	I1006 02:53:54.679255       1 shared_informer.go:318] Caches are synced for service account
	I1006 02:53:54.685138       1 shared_informer.go:318] Caches are synced for HPA
	I1006 02:53:54.685239       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I1006 02:53:54.687555       1 shared_informer.go:318] Caches are synced for GC
	I1006 02:53:54.694745       1 shared_informer.go:318] Caches are synced for attach detach
	I1006 02:53:54.704189       1 shared_informer.go:318] Caches are synced for disruption
	I1006 02:53:54.717134       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1006 02:53:54.740457       1 shared_informer.go:318] Caches are synced for resource quota
	I1006 02:53:54.746984       1 shared_informer.go:318] Caches are synced for resource quota
	I1006 02:53:55.144570       1 shared_informer.go:318] Caches are synced for garbage collector
	I1006 02:53:55.204396       1 shared_informer.go:318] Caches are synced for garbage collector
	I1006 02:53:55.204435       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-proxy [0651a743420f5675b7b2f78756250ac69a5295917655f1ef7455ce8e1c136edd] <==
	* I1006 02:53:42.987863       1 server_others.go:69] "Using iptables proxy"
	I1006 02:53:43.316170       1 node.go:141] Successfully retrieved node IP: 192.168.67.2
	I1006 02:53:43.545915       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1006 02:53:43.575943       1 server_others.go:152] "Using iptables Proxier"
	I1006 02:53:43.577975       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1006 02:53:43.577997       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1006 02:53:43.578058       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1006 02:53:43.578653       1 server.go:846] "Version info" version="v1.28.2"
	I1006 02:53:43.578670       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 02:53:43.600110       1 config.go:188] "Starting service config controller"
	I1006 02:53:43.607473       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1006 02:53:43.606559       1 config.go:97] "Starting endpoint slice config controller"
	I1006 02:53:43.607506       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1006 02:53:43.607255       1 config.go:315] "Starting node config controller"
	I1006 02:53:43.607513       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1006 02:53:43.711404       1 shared_informer.go:318] Caches are synced for node config
	I1006 02:53:43.711443       1 shared_informer.go:318] Caches are synced for service config
	I1006 02:53:43.711455       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [0cf5928c77815df3b44ae491c77e3730749b2dc1ec4b9c6754f76e504722f1c4] <==
	* I1006 02:52:57.915372       1 server_others.go:69] "Using iptables proxy"
	E1006 02:53:07.919374       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-647181": net/http: TLS handshake timeout
	I1006 02:53:14.811482       1 node.go:141] Successfully retrieved node IP: 192.168.67.2
	I1006 02:53:16.335454       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1006 02:53:16.338814       1 server_others.go:152] "Using iptables Proxier"
	I1006 02:53:16.338917       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1006 02:53:16.338948       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1006 02:53:16.339024       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1006 02:53:16.339287       1 server.go:846] "Version info" version="v1.28.2"
	I1006 02:53:16.341341       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 02:53:16.342244       1 config.go:188] "Starting service config controller"
	I1006 02:53:16.342371       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1006 02:53:16.342434       1 config.go:97] "Starting endpoint slice config controller"
	I1006 02:53:16.342463       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1006 02:53:16.344806       1 config.go:315] "Starting node config controller"
	I1006 02:53:16.344885       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1006 02:53:16.442805       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1006 02:53:16.442864       1 shared_informer.go:318] Caches are synced for service config
	I1006 02:53:16.445411       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [0c7033aa5193617c5b60e134c5208cc49f7bbfe902e6cb898289396e7e099d97] <==
	* I1006 02:53:12.622341       1 serving.go:348] Generated self-signed cert in-memory
	I1006 02:53:16.323657       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I1006 02:53:16.323683       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1006 02:53:16.325109       1 secure_serving.go:111] Initial population of client CA failed: Get "https://192.168.67.2:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": context canceled
	I1006 02:53:16.325906       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	E1006 02:53:16.325962       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	I1006 02:53:16.326058       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1006 02:53:16.326080       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	E1006 02:53:16.326088       1 shared_informer.go:314] unable to sync caches for RequestHeaderAuthRequestController
	I1006 02:53:16.326093       1 requestheader_controller.go:176] Shutting down RequestHeaderAuthRequestController
	I1006 02:53:16.326105       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1006 02:53:16.326116       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I1006 02:53:16.326182       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	E1006 02:53:16.326561       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [d5d42bb4b86d0dc14c954c1197facb1798c3a7771ac728c7c1491d2a41a018a3] <==
	* I1006 02:53:41.102376       1 serving.go:348] Generated self-signed cert in-memory
	I1006 02:53:42.671221       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I1006 02:53:42.681809       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 02:53:42.724331       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1006 02:53:42.724379       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1006 02:53:42.724444       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 02:53:42.724457       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1006 02:53:42.724479       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 02:53:42.724495       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1006 02:53:42.731657       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1006 02:53:42.731769       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1006 02:53:42.828998       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1006 02:53:42.829830       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1006 02:53:42.829856       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Oct 06 02:53:35 pause-647181 kubelet[3727]: W1006 02:53:35.916130    3727 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Oct 06 02:53:35 pause-647181 kubelet[3727]: E1006 02:53:35.916225    3727 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Oct 06 02:53:35 pause-647181 kubelet[3727]: W1006 02:53:35.919694    3727 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Oct 06 02:53:35 pause-647181 kubelet[3727]: E1006 02:53:35.919757    3727 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Oct 06 02:53:36 pause-647181 kubelet[3727]: W1006 02:53:36.065740    3727 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-647181&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Oct 06 02:53:36 pause-647181 kubelet[3727]: E1006 02:53:36.065803    3727 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-647181&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Oct 06 02:53:36 pause-647181 kubelet[3727]: I1006 02:53:36.406530    3727 kubelet_node_status.go:70] "Attempting to register node" node="pause-647181"
	Oct 06 02:53:41 pause-647181 kubelet[3727]: I1006 02:53:41.866794    3727 apiserver.go:52] "Watching apiserver"
	Oct 06 02:53:41 pause-647181 kubelet[3727]: I1006 02:53:41.873420    3727 topology_manager.go:215] "Topology Admit Handler" podUID="eec193c3-c96f-43e8-a0c3-e0964c0c7b51" podNamespace="kube-system" podName="kube-proxy-9vvq2"
	Oct 06 02:53:41 pause-647181 kubelet[3727]: I1006 02:53:41.873542    3727 topology_manager.go:215] "Topology Admit Handler" podUID="5582861e-0771-4be9-85fa-3194c946e4bc" podNamespace="kube-system" podName="coredns-5dd5756b68-qvjlc"
	Oct 06 02:53:41 pause-647181 kubelet[3727]: I1006 02:53:41.873614    3727 topology_manager.go:215] "Topology Admit Handler" podUID="006acf6b-70f8-4596-8965-0f13beb4fff6" podNamespace="kube-system" podName="kindnet-5zz7b"
	Oct 06 02:53:41 pause-647181 kubelet[3727]: I1006 02:53:41.974182    3727 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Oct 06 02:53:42 pause-647181 kubelet[3727]: I1006 02:53:42.000575    3727 kubelet_node_status.go:108] "Node was previously registered" node="pause-647181"
	Oct 06 02:53:42 pause-647181 kubelet[3727]: I1006 02:53:42.000950    3727 kubelet_node_status.go:73] "Successfully registered node" node="pause-647181"
	Oct 06 02:53:42 pause-647181 kubelet[3727]: I1006 02:53:42.006668    3727 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 06 02:53:42 pause-647181 kubelet[3727]: I1006 02:53:42.007625    3727 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 06 02:53:42 pause-647181 kubelet[3727]: I1006 02:53:42.068412    3727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/006acf6b-70f8-4596-8965-0f13beb4fff6-cni-cfg\") pod \"kindnet-5zz7b\" (UID: \"006acf6b-70f8-4596-8965-0f13beb4fff6\") " pod="kube-system/kindnet-5zz7b"
	Oct 06 02:53:42 pause-647181 kubelet[3727]: I1006 02:53:42.068490    3727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eec193c3-c96f-43e8-a0c3-e0964c0c7b51-xtables-lock\") pod \"kube-proxy-9vvq2\" (UID: \"eec193c3-c96f-43e8-a0c3-e0964c0c7b51\") " pod="kube-system/kube-proxy-9vvq2"
	Oct 06 02:53:42 pause-647181 kubelet[3727]: I1006 02:53:42.068522    3727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/006acf6b-70f8-4596-8965-0f13beb4fff6-xtables-lock\") pod \"kindnet-5zz7b\" (UID: \"006acf6b-70f8-4596-8965-0f13beb4fff6\") " pod="kube-system/kindnet-5zz7b"
	Oct 06 02:53:42 pause-647181 kubelet[3727]: I1006 02:53:42.068593    3727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/006acf6b-70f8-4596-8965-0f13beb4fff6-lib-modules\") pod \"kindnet-5zz7b\" (UID: \"006acf6b-70f8-4596-8965-0f13beb4fff6\") " pod="kube-system/kindnet-5zz7b"
	Oct 06 02:53:42 pause-647181 kubelet[3727]: I1006 02:53:42.068630    3727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eec193c3-c96f-43e8-a0c3-e0964c0c7b51-lib-modules\") pod \"kube-proxy-9vvq2\" (UID: \"eec193c3-c96f-43e8-a0c3-e0964c0c7b51\") " pod="kube-system/kube-proxy-9vvq2"
	Oct 06 02:53:42 pause-647181 kubelet[3727]: I1006 02:53:42.174651    3727 scope.go:117] "RemoveContainer" containerID="1f7d2ed33dd3634ef3dd4898e89ea5b1a30b16dde68873ab0ec0afdcba6d3efd"
	Oct 06 02:53:42 pause-647181 kubelet[3727]: I1006 02:53:42.175544    3727 scope.go:117] "RemoveContainer" containerID="0cf5928c77815df3b44ae491c77e3730749b2dc1ec4b9c6754f76e504722f1c4"
	Oct 06 02:53:42 pause-647181 kubelet[3727]: I1006 02:53:42.176785    3727 scope.go:117] "RemoveContainer" containerID="5481ab047c1f37fc36bbac6bffa2691ed4776094a359b4a13151405f76d97cb7"
	Oct 06 02:53:49 pause-647181 kubelet[3727]: I1006 02:53:49.385808    3727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
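
The kubelet entries near the end of the dump above show the usual restart pattern: repeated "connection refused" dials against the apiserver at 192.168.67.2:8443 while it is down, then a successful node re-registration once it comes back. Below is a minimal Go sketch of that kind of wait loop; waitForAPIServer is a hypothetical helper written for illustration (not minikube's own retry code), with the address and a plausible timeout taken from the log above.

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForAPIServer polls a TCP address until it accepts a connection or the
// deadline passes, mirroring the recovery visible in the kubelet log: dials
// fail with "connection refused" while the apiserver is down, then succeed.
func waitForAPIServer(addr string, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close() // port is accepting connections again
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not reachable within %s", addr, deadline)
}

func main() {
	// Address taken from the log above (pause-647181 apiserver); the 90s
	// budget is an assumption for illustration.
	if err := waitForAPIServer("192.168.67.2:8443", 90*time.Second); err != nil {
		fmt.Println(err)
	}
}
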
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-647181 -n pause-647181
helpers_test.go:261: (dbg) Run:  kubectl --context pause-647181 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
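
For reference, the triage sequence the harness runs around these failures (apiserver status, listing non-Running pods, tailing the last 25 log lines) can be reproduced outside the test run. The sketch below shells out to the same commands shown in the log via os/exec; the binary path and profile name are copied from the log above, and this is an illustration of the steps, not the helpers_test.go implementation.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same three post-mortem commands the harness logs; assumes it is run
	// from the test workspace root where out/minikube-linux-arm64 lives.
	profile := "pause-647181"
	cmds := [][]string{
		{"out/minikube-linux-arm64", "status", "--format={{.APIServer}}", "-p", profile, "-n", profile},
		{"kubectl", "--context", profile, "get", "po",
			"-o=jsonpath={.items[*].metadata.name}", "-A", "--field-selector=status.phase!=Running"},
		{"out/minikube-linux-arm64", "-p", profile, "logs", "-n", "25"},
	}
	for _, c := range cmds {
		out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
		fmt.Printf("$ %v\n%s\n", c, out)
		if err != nil {
			fmt.Println("exited with error:", err)
		}
	}
}
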
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-647181
helpers_test.go:235: (dbg) docker inspect pause-647181:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d963a8039ef745e725d58e86c36b0b62a1c7fdb86c367a3dda708055b98aab82",
	        "Created": "2023-10-06T02:51:15.176736064Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2392407,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-06T02:51:15.609872109Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7c31788aee97084e64d3a410721295a10fc01c1f34b468c1bc9be09686708026",
	        "ResolvConfPath": "/var/lib/docker/containers/d963a8039ef745e725d58e86c36b0b62a1c7fdb86c367a3dda708055b98aab82/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d963a8039ef745e725d58e86c36b0b62a1c7fdb86c367a3dda708055b98aab82/hostname",
	        "HostsPath": "/var/lib/docker/containers/d963a8039ef745e725d58e86c36b0b62a1c7fdb86c367a3dda708055b98aab82/hosts",
	        "LogPath": "/var/lib/docker/containers/d963a8039ef745e725d58e86c36b0b62a1c7fdb86c367a3dda708055b98aab82/d963a8039ef745e725d58e86c36b0b62a1c7fdb86c367a3dda708055b98aab82-json.log",
	        "Name": "/pause-647181",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-647181:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-647181",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/85cfb0b01ca49359273815724ec03173a1977160157825a7df07e9dfab7ef3b8-init/diff:/var/lib/docker/overlay2/ab4f4fc5e8cd2d4bbf1718e21432b9cb0d953b7279be1c1cbb7bd550f03b46dc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/85cfb0b01ca49359273815724ec03173a1977160157825a7df07e9dfab7ef3b8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/85cfb0b01ca49359273815724ec03173a1977160157825a7df07e9dfab7ef3b8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/85cfb0b01ca49359273815724ec03173a1977160157825a7df07e9dfab7ef3b8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-647181",
	                "Source": "/var/lib/docker/volumes/pause-647181/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-647181",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-647181",
	                "name.minikube.sigs.k8s.io": "pause-647181",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9ee03075612006d4af3582c8b04a2cbedfe033ecb46d1e46a79dd409b1fff037",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35455"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35454"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35451"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35453"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35452"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/9ee030756120",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-647181": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d963a8039ef7",
	                        "pause-647181"
	                    ],
	                    "NetworkID": "0626abfe2f4457ad5c980476dc74b843a105e00e3a82a4ae05090fa29c092936",
	                    "EndpointID": "0f2153e88e1c0c0f9e7cf6122748de573c565f656e3f28f5807a95b3a9e51a6d",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
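
The NetworkSettings.Ports block in the inspect output above is where the host-side endpoints live (for example 8443/tcp published on 127.0.0.1:35452). A small Go sketch that decodes `docker inspect` output and pulls out that mapping follows; the struct is trimmed to the fields actually read here, and the container name is taken from the log. This is an illustration, not how minikube itself consumes the data.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// container is trimmed to the single field this sketch reads from
// `docker inspect` output.
type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "pause-647181").Output()
	if err != nil {
		panic(err)
	}
	var cs []container // docker inspect prints a JSON array
	if err := json.Unmarshal(out, &cs); err != nil {
		panic(err)
	}
	// For the output above this prints: apiserver published on 127.0.0.1:35452
	for _, b := range cs[0].NetworkSettings.Ports["8443/tcp"] {
		fmt.Printf("apiserver published on %s:%s\n", b.HostIp, b.HostPort)
	}
}
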
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-647181 -n pause-647181
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p pause-647181 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p pause-647181 logs -n 25: (2.73089707s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-084205 sudo                                | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | journalctl -xeu kubelet --all                        |                          |         |         |                     |                     |
	|         | --full --no-pager                                    |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo cat                            | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo cat                            | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | /var/lib/kubelet/config.yaml                         |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo                                | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | systemctl status docker --all                        |                          |         |         |                     |                     |
	|         | --full --no-pager                                    |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo                                | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | systemctl cat docker                                 |                          |         |         |                     |                     |
	|         | --no-pager                                           |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo cat                            | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | /etc/docker/daemon.json                              |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo docker                         | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | system info                                          |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo                                | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | systemctl status cri-docker                          |                          |         |         |                     |                     |
	|         | --all --full --no-pager                              |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo                                | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | systemctl cat cri-docker                             |                          |         |         |                     |                     |
	|         | --no-pager                                           |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo cat                            | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo cat                            | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo                                | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | cri-dockerd --version                                |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo                                | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | systemctl status containerd                          |                          |         |         |                     |                     |
	|         | --all --full --no-pager                              |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo                                | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | systemctl cat containerd                             |                          |         |         |                     |                     |
	|         | --no-pager                                           |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo cat                            | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo cat                            | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | /etc/containerd/config.toml                          |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo                                | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | containerd config dump                               |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo                                | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | systemctl status crio --all                          |                          |         |         |                     |                     |
	|         | --full --no-pager                                    |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo                                | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo find                           | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                          |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                          |         |         |                     |                     |
	| ssh     | -p cilium-084205 sudo crio                           | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC |                     |
	|         | config                                               |                          |         |         |                     |                     |
	| delete  | -p cilium-084205                                     | cilium-084205            | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC | 06 Oct 23 02:52 UTC |
	| start   | -p force-systemd-env-836004                          | force-systemd-env-836004 | jenkins | v1.31.2 | 06 Oct 23 02:52 UTC | 06 Oct 23 02:53 UTC |
	|         | --memory=2048                                        |                          |         |         |                     |                     |
	|         | --alsologtostderr                                    |                          |         |         |                     |                     |
	|         | -v=5 --driver=docker                                 |                          |         |         |                     |                     |
	|         | --container-runtime=crio                             |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-836004                          | force-systemd-env-836004 | jenkins | v1.31.2 | 06 Oct 23 02:53 UTC | 06 Oct 23 02:53 UTC |
	| start   | -p cert-expiration-885413                            | cert-expiration-885413   | jenkins | v1.31.2 | 06 Oct 23 02:53 UTC |                     |
	|         | --memory=2048                                        |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                          |         |         |                     |                     |
	|         | --driver=docker                                      |                          |         |         |                     |                     |
	|         | --container-runtime=crio                             |                          |         |         |                     |                     |
	|---------|------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/06 02:53:35
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 02:53:35.736231 2405077 out.go:296] Setting OutFile to fd 1 ...
	I1006 02:53:35.736431 2405077 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 02:53:35.736436 2405077 out.go:309] Setting ErrFile to fd 2...
	I1006 02:53:35.736441 2405077 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 02:53:35.736683 2405077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-2262959/.minikube/bin
	I1006 02:53:35.737058 2405077 out.go:303] Setting JSON to false
	I1006 02:53:35.743555 2405077 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":45362,"bootTime":1696515454,"procs":344,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1006 02:53:35.743633 2405077 start.go:138] virtualization:  
	I1006 02:53:35.747517 2405077 out.go:177] * [cert-expiration-885413] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1006 02:53:35.750397 2405077 out.go:177]   - MINIKUBE_LOCATION=17314
	I1006 02:53:35.752351 2405077 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 02:53:35.750495 2405077 notify.go:220] Checking for updates...
	I1006 02:53:35.754766 2405077 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17314-2262959/kubeconfig
	I1006 02:53:35.757067 2405077 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-2262959/.minikube
	I1006 02:53:35.759203 2405077 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 02:53:35.761385 2405077 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 02:53:35.763856 2405077 config.go:182] Loaded profile config "pause-647181": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1006 02:53:35.763946 2405077 driver.go:378] Setting default libvirt URI to qemu:///system
	I1006 02:53:35.825367 2405077 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1006 02:53:35.825458 2405077 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 02:53:35.964741 2405077 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-06 02:53:35.953220454 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1006 02:53:35.964853 2405077 docker.go:295] overlay module found
	I1006 02:53:35.968556 2405077 out.go:177] * Using the docker driver based on user configuration
	I1006 02:53:35.970539 2405077 start.go:298] selected driver: docker
	I1006 02:53:35.970548 2405077 start.go:902] validating driver "docker" against <nil>
	I1006 02:53:35.970560 2405077 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 02:53:35.971199 2405077 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 02:53:36.143119 2405077 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-06 02:53:36.130389669 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1006 02:53:36.143257 2405077 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1006 02:53:36.143503 2405077 start_flags.go:908] Wait components to verify : map[apiserver:true system_pods:true]
	I1006 02:53:36.146075 2405077 out.go:177] * Using Docker driver with root privileges
	I1006 02:53:36.148191 2405077 cni.go:84] Creating CNI manager for ""
	I1006 02:53:36.148221 2405077 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 02:53:36.148239 2405077 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 02:53:36.148249 2405077 start_flags.go:323] config:
	{Name:cert-expiration-885413 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:cert-expiration-885413 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1006 02:53:36.150723 2405077 out.go:177] * Starting control plane node cert-expiration-885413 in cluster cert-expiration-885413
	I1006 02:53:36.153150 2405077 cache.go:122] Beginning downloading kic base image for docker with crio
	I1006 02:53:36.155113 2405077 out.go:177] * Pulling base image ...
	I1006 02:53:36.157001 2405077 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1006 02:53:36.157201 2405077 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1006 02:53:36.157716 2405077 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
	I1006 02:53:36.157725 2405077 cache.go:57] Caching tarball of preloaded images
	I1006 02:53:36.157798 2405077 preload.go:174] Found /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1006 02:53:36.157804 2405077 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1006 02:53:36.157904 2405077 profile.go:148] Saving config to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/config.json ...
	I1006 02:53:36.157920 2405077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/config.json: {Name:mk44694d6e937d667332cb1b26aad1b2fc901feb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:53:36.188984 2405077 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
	I1006 02:53:36.189002 2405077 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
	I1006 02:53:36.189018 2405077 cache.go:195] Successfully downloaded all kic artifacts
	I1006 02:53:36.189092 2405077 start.go:365] acquiring machines lock for cert-expiration-885413: {Name:mkfcc9d140c46c3c2d732d336710162d1a815c7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 02:53:36.189213 2405077 start.go:369] acquired machines lock for "cert-expiration-885413" in 104.009µs
	I1006 02:53:36.189236 2405077 start.go:93] Provisioning new machine with config: &{Name:cert-expiration-885413 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:cert-expiration-885413 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 02:53:36.189313 2405077 start.go:125] createHost starting for "" (driver="docker")
	I1006 02:53:36.075690 2399191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 02:53:36.108214 2399191 api_server.go:72] duration metric: took 1.14071579s to wait for apiserver process to appear ...
	I1006 02:53:36.108245 2399191 api_server.go:88] waiting for apiserver healthz status ...
	I1006 02:53:36.108278 2399191 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1006 02:53:36.108547 2399191 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I1006 02:53:36.108576 2399191 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1006 02:53:36.108738 2399191 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I1006 02:53:36.609248 2399191 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1006 02:53:36.192894 2405077 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1006 02:53:36.193154 2405077 start.go:159] libmachine.API.Create for "cert-expiration-885413" (driver="docker")
	I1006 02:53:36.193193 2405077 client.go:168] LocalClient.Create starting
	I1006 02:53:36.193259 2405077 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem
	I1006 02:53:36.193291 2405077 main.go:141] libmachine: Decoding PEM data...
	I1006 02:53:36.193306 2405077 main.go:141] libmachine: Parsing certificate...
	I1006 02:53:36.193362 2405077 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem
	I1006 02:53:36.193378 2405077 main.go:141] libmachine: Decoding PEM data...
	I1006 02:53:36.193390 2405077 main.go:141] libmachine: Parsing certificate...
	I1006 02:53:36.193748 2405077 cli_runner.go:164] Run: docker network inspect cert-expiration-885413 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 02:53:36.221930 2405077 cli_runner.go:211] docker network inspect cert-expiration-885413 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 02:53:36.222012 2405077 network_create.go:281] running [docker network inspect cert-expiration-885413] to gather additional debugging logs...
	I1006 02:53:36.222032 2405077 cli_runner.go:164] Run: docker network inspect cert-expiration-885413
	W1006 02:53:36.249849 2405077 cli_runner.go:211] docker network inspect cert-expiration-885413 returned with exit code 1
	I1006 02:53:36.249897 2405077 network_create.go:284] error running [docker network inspect cert-expiration-885413]: docker network inspect cert-expiration-885413: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network cert-expiration-885413 not found
	I1006 02:53:36.249909 2405077 network_create.go:286] output of [docker network inspect cert-expiration-885413]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network cert-expiration-885413 not found
	
	** /stderr **
	I1006 02:53:36.250019 2405077 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 02:53:36.286688 2405077 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-23fd96ce330f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:5d:0d:78:1a} reservation:<nil>}
	I1006 02:53:36.287157 2405077 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-8cf15a65a1dd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:06:08:d3:35} reservation:<nil>}
	I1006 02:53:36.287667 2405077 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0626abfe2f44 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:84:59:a7:45} reservation:<nil>}
	I1006 02:53:36.288335 2405077 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002614db0}
	I1006 02:53:36.288371 2405077 network_create.go:124] attempt to create docker network cert-expiration-885413 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1006 02:53:36.288441 2405077 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-885413 cert-expiration-885413
	I1006 02:53:36.386622 2405077 network_create.go:108] docker network cert-expiration-885413 192.168.76.0/24 created
	I1006 02:53:36.386642 2405077 kic.go:118] calculated static IP "192.168.76.2" for the "cert-expiration-885413" container
	I1006 02:53:36.386715 2405077 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 02:53:36.410801 2405077 cli_runner.go:164] Run: docker volume create cert-expiration-885413 --label name.minikube.sigs.k8s.io=cert-expiration-885413 --label created_by.minikube.sigs.k8s.io=true
	I1006 02:53:36.440765 2405077 oci.go:103] Successfully created a docker volume cert-expiration-885413
	I1006 02:53:36.440848 2405077 cli_runner.go:164] Run: docker run --rm --name cert-expiration-885413-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-885413 --entrypoint /usr/bin/test -v cert-expiration-885413:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1006 02:53:37.230016 2405077 oci.go:107] Successfully prepared a docker volume cert-expiration-885413
	I1006 02:53:37.230051 2405077 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1006 02:53:37.230068 2405077 kic.go:191] Starting extracting preloaded images to volume ...
	I1006 02:53:37.230157 2405077 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-885413:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir
	I1006 02:53:41.609651 2399191 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1006 02:53:41.609692 2399191 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1006 02:53:41.901879 2399191 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1006 02:53:41.901909 2399191 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1006 02:53:41.901926 2399191 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1006 02:53:41.912718 2399191 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1006 02:53:41.912753 2399191 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1006 02:53:42.108884 2399191 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1006 02:53:42.132565 2399191 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1006 02:53:42.132597 2399191 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1006 02:53:42.609872 2399191 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1006 02:53:42.630319 2399191 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1006 02:53:42.630345 2399191 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1006 02:53:43.110627 2399191 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1006 02:53:43.142092 2399191 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1006 02:53:43.142116 2399191 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1006 02:53:43.609334 2399191 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1006 02:53:43.629903 2399191 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1006 02:53:43.629986 2399191 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1006 02:53:44.109232 2399191 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1006 02:53:44.131027 2399191 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1006 02:53:44.152108 2399191 api_server.go:141] control plane version: v1.28.2
	I1006 02:53:44.152146 2399191 api_server.go:131] duration metric: took 8.043892733s to wait for apiserver health ...
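	The [+]/[-] check lines above are the apiserver's verbose healthz report, and the same output can be fetched by hand. A minimal sketch, assuming the in-cluster endpoint from this log:

		# -k: the apiserver presents a cluster-internal certificate
		curl -k 'https://192.168.67.2:8443/healthz?verbose'

		# or, with a working kubeconfig:
		kubectl get --raw='/healthz?verbose'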
	I1006 02:53:44.152157 2399191 cni.go:84] Creating CNI manager for ""
	I1006 02:53:44.152170 2399191 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 02:53:44.154711 2399191 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1006 02:53:44.156991 2399191 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1006 02:53:44.174968 2399191 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1006 02:53:44.174988 2399191 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1006 02:53:44.219173 2399191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1006 02:53:45.260624 2399191 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.041361101s)
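	kindnet is installed as an ordinary manifest through the bundled kubectl. To watch the rollout afterwards (a sketch, assuming the daemonset keeps kindnet's default name):

		kubectl -n kube-system rollout status daemonset kindnet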
	I1006 02:53:45.260679 2399191 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 02:53:45.271930 2399191 system_pods.go:59] 7 kube-system pods found
	I1006 02:53:45.271980 2399191 system_pods.go:61] "coredns-5dd5756b68-qvjlc" [5582861e-0771-4be9-85fa-3194c946e4bc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 02:53:45.271990 2399191 system_pods.go:61] "etcd-pause-647181" [2278dbab-5509-4155-ab45-4bbc4a195cea] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1006 02:53:45.271999 2399191 system_pods.go:61] "kindnet-5zz7b" [006acf6b-70f8-4596-8965-0f13beb4fff6] Running
	I1006 02:53:45.272006 2399191 system_pods.go:61] "kube-apiserver-pause-647181" [d0788f9d-f101-42ba-9145-4435f137fa8b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1006 02:53:45.272036 2399191 system_pods.go:61] "kube-controller-manager-pause-647181" [255471e6-b5e4-4d3e-b091-02979c39ab2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1006 02:53:45.272042 2399191 system_pods.go:61] "kube-proxy-9vvq2" [eec193c3-c96f-43e8-a0c3-e0964c0c7b51] Running
	I1006 02:53:45.272051 2399191 system_pods.go:61] "kube-scheduler-pause-647181" [cb621118-2a4c-4ed4-88ad-f64283986b73] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 02:53:45.272058 2399191 system_pods.go:74] duration metric: took 11.353293ms to wait for pod list to return data ...
	I1006 02:53:45.272067 2399191 node_conditions.go:102] verifying NodePressure condition ...
	I1006 02:53:45.275742 2399191 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 02:53:45.275780 2399191 node_conditions.go:123] node cpu capacity is 2
	I1006 02:53:45.275792 2399191 node_conditions.go:105] duration metric: took 3.720589ms to run NodePressure ...
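	The NodePressure figures (203034800Ki of ephemeral storage, 2 CPUs) are read straight from the node object and can be rechecked directly, e.g.:

		kubectl describe node pause-647181 | grep -A5 'Capacity:'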
	I1006 02:53:45.275815 2399191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 02:53:45.497975 2399191 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1006 02:53:45.506362 2399191 kubeadm.go:787] kubelet initialised
	I1006 02:53:45.506385 2399191 kubeadm.go:788] duration metric: took 8.350181ms waiting for restarted kubelet to initialise ...
	I1006 02:53:45.506395 2399191 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1006 02:53:45.512441 2399191 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-qvjlc" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:42.096404 2405077 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-885413:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir: (4.866192142s)
	I1006 02:53:42.096423 2405077 kic.go:200] duration metric: took 4.866352 seconds to extract preloaded images to volume
	W1006 02:53:42.096584 2405077 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1006 02:53:42.096701 2405077 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 02:53:42.264069 2405077 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-expiration-885413 --name cert-expiration-885413 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-885413 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-expiration-885413 --network cert-expiration-885413 --ip 192.168.76.2 --volume cert-expiration-885413:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae
	I1006 02:53:42.837568 2405077 cli_runner.go:164] Run: docker container inspect cert-expiration-885413 --format={{.State.Running}}
	I1006 02:53:42.866448 2405077 cli_runner.go:164] Run: docker container inspect cert-expiration-885413 --format={{.State.Status}}
	I1006 02:53:42.895959 2405077 cli_runner.go:164] Run: docker exec cert-expiration-885413 stat /var/lib/dpkg/alternatives/iptables
	I1006 02:53:43.035500 2405077 oci.go:144] the created container "cert-expiration-885413" has a running status.
	I1006 02:53:43.035518 2405077 kic.go:222] Creating ssh key for kic: /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/cert-expiration-885413/id_rsa...
	I1006 02:53:43.473606 2405077 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/cert-expiration-885413/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 02:53:43.510827 2405077 cli_runner.go:164] Run: docker container inspect cert-expiration-885413 --format={{.State.Status}}
	I1006 02:53:43.540123 2405077 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 02:53:43.540135 2405077 kic_runner.go:114] Args: [docker exec --privileged cert-expiration-885413 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1006 02:53:43.644357 2405077 cli_runner.go:164] Run: docker container inspect cert-expiration-885413 --format={{.State.Status}}
	I1006 02:53:43.686525 2405077 machine.go:88] provisioning docker machine ...
	I1006 02:53:43.686546 2405077 ubuntu.go:169] provisioning hostname "cert-expiration-885413"
	I1006 02:53:43.686608 2405077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-885413
	I1006 02:53:43.719942 2405077 main.go:141] libmachine: Using SSH client type: native
	I1006 02:53:43.720373 2405077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 35470 <nil> <nil>}
	I1006 02:53:43.720384 2405077 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-885413 && echo "cert-expiration-885413" | sudo tee /etc/hostname
	I1006 02:53:43.720989 2405077 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42208->127.0.0.1:35470: read: connection reset by peer
	I1006 02:53:46.870925 2405077 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-885413
	
	I1006 02:53:46.871003 2405077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-885413
	I1006 02:53:46.896957 2405077 main.go:141] libmachine: Using SSH client type: native
	I1006 02:53:46.897364 2405077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 35470 <nil> <nil>}
	I1006 02:53:46.897380 2405077 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-885413' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-885413/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-885413' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 02:53:47.028614 2405077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 02:53:47.028631 2405077 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17314-2262959/.minikube CaCertPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17314-2262959/.minikube}
	I1006 02:53:47.028650 2405077 ubuntu.go:177] setting up certificates
	I1006 02:53:47.028658 2405077 provision.go:83] configureAuth start
	I1006 02:53:47.028716 2405077 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-885413
	I1006 02:53:47.063445 2405077 provision.go:138] copyHostCerts
	I1006 02:53:47.063519 2405077 exec_runner.go:144] found /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.pem, removing ...
	I1006 02:53:47.063530 2405077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.pem
	I1006 02:53:47.063648 2405077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.pem (1082 bytes)
	I1006 02:53:47.063762 2405077 exec_runner.go:144] found /home/jenkins/minikube-integration/17314-2262959/.minikube/cert.pem, removing ...
	I1006 02:53:47.063772 2405077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17314-2262959/.minikube/cert.pem
	I1006 02:53:47.063817 2405077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17314-2262959/.minikube/cert.pem (1123 bytes)
	I1006 02:53:47.063913 2405077 exec_runner.go:144] found /home/jenkins/minikube-integration/17314-2262959/.minikube/key.pem, removing ...
	I1006 02:53:47.063916 2405077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17314-2262959/.minikube/key.pem
	I1006 02:53:47.063945 2405077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17314-2262959/.minikube/key.pem (1675 bytes)
	I1006 02:53:47.064012 2405077 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-885413 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube cert-expiration-885413]
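	The server certificate is minted with the SANs listed above (node IP, localhost, hostname). One way to double-check them on the generated cert, using the path from this log:

		openssl x509 -noout -text \
		  -in /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server.pem \
		  | grep -A1 'Subject Alternative Name'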
	I1006 02:53:48.059635 2405077 provision.go:172] copyRemoteCerts
	I1006 02:53:48.059710 2405077 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 02:53:48.059779 2405077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-885413
	I1006 02:53:48.080786 2405077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35470 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/cert-expiration-885413/id_rsa Username:docker}
	I1006 02:53:48.178185 2405077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 02:53:48.207345 2405077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1006 02:53:48.236906 2405077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1006 02:53:48.268190 2405077 provision.go:86] duration metric: configureAuth took 1.239519759s
	I1006 02:53:48.268206 2405077 ubuntu.go:193] setting minikube options for container-runtime
	I1006 02:53:48.268389 2405077 config.go:182] Loaded profile config "cert-expiration-885413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1006 02:53:48.268499 2405077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-885413
	I1006 02:53:48.287707 2405077 main.go:141] libmachine: Using SSH client type: native
	I1006 02:53:48.288125 2405077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 35470 <nil> <nil>}
	I1006 02:53:48.288138 2405077 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1006 02:53:48.546423 2405077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1006 02:53:48.546436 2405077 machine.go:91] provisioned docker machine in 4.859899606s
	I1006 02:53:48.546443 2405077 client.go:171] LocalClient.Create took 12.353246432s
	I1006 02:53:48.546454 2405077 start.go:167] duration metric: libmachine.API.Create for "cert-expiration-885413" took 12.353301875s
	I1006 02:53:48.546461 2405077 start.go:300] post-start starting for "cert-expiration-885413" (driver="docker")
	I1006 02:53:48.546470 2405077 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 02:53:48.546539 2405077 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 02:53:48.546577 2405077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-885413
	I1006 02:53:48.565623 2405077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35470 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/cert-expiration-885413/id_rsa Username:docker}
	I1006 02:53:48.662428 2405077 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 02:53:48.666712 2405077 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 02:53:48.666739 2405077 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1006 02:53:48.666752 2405077 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1006 02:53:48.666758 2405077 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1006 02:53:48.666768 2405077 filesync.go:126] Scanning /home/jenkins/minikube-integration/17314-2262959/.minikube/addons for local assets ...
	I1006 02:53:48.666838 2405077 filesync.go:126] Scanning /home/jenkins/minikube-integration/17314-2262959/.minikube/files for local assets ...
	I1006 02:53:48.666916 2405077 filesync.go:149] local asset: /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/22683062.pem -> 22683062.pem in /etc/ssl/certs
	I1006 02:53:48.667030 2405077 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1006 02:53:48.677676 2405077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/22683062.pem --> /etc/ssl/certs/22683062.pem (1708 bytes)
	I1006 02:53:48.707706 2405077 start.go:303] post-start completed in 161.230835ms
	I1006 02:53:48.708082 2405077 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-885413
	I1006 02:53:48.726129 2405077 profile.go:148] Saving config to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/config.json ...
	I1006 02:53:48.726403 2405077 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 02:53:48.726446 2405077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-885413
	I1006 02:53:48.745418 2405077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35470 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/cert-expiration-885413/id_rsa Username:docker}
	I1006 02:53:48.841457 2405077 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 02:53:48.848005 2405077 start.go:128] duration metric: createHost completed in 12.658666251s
	I1006 02:53:48.848023 2405077 start.go:83] releasing machines lock for "cert-expiration-885413", held for 12.658803926s
	I1006 02:53:48.848111 2405077 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-885413
	I1006 02:53:48.867474 2405077 ssh_runner.go:195] Run: cat /version.json
	I1006 02:53:48.867514 2405077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-885413
	I1006 02:53:48.867783 2405077 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 02:53:48.867858 2405077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-885413
	I1006 02:53:48.888489 2405077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35470 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/cert-expiration-885413/id_rsa Username:docker}
	I1006 02:53:48.910596 2405077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35470 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/cert-expiration-885413/id_rsa Username:docker}
	I1006 02:53:48.983565 2405077 ssh_runner.go:195] Run: systemctl --version
	I1006 02:53:49.124023 2405077 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1006 02:53:49.272586 2405077 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1006 02:53:49.277974 2405077 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 02:53:49.303481 2405077 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1006 02:53:49.303554 2405077 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 02:53:49.345680 2405077 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
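	Conflicting CNI configs are renamed with a .mk_disabled suffix rather than deleted, so they can be restored later. What remains active is visible inside the node with:

		ls -l /etc/cni/net.d/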
	I1006 02:53:49.345693 2405077 start.go:472] detecting cgroup driver to use...
	I1006 02:53:49.345725 2405077 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1006 02:53:49.345777 2405077 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 02:53:49.364476 2405077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 02:53:49.378446 2405077 docker.go:198] disabling cri-docker service (if available) ...
	I1006 02:53:49.378501 2405077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 02:53:49.397918 2405077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 02:53:49.417318 2405077 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 02:53:49.517598 2405077 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 02:53:49.639706 2405077 docker.go:214] disabling docker service ...
	I1006 02:53:49.639783 2405077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 02:53:49.664397 2405077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 02:53:49.679019 2405077 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 02:53:49.779803 2405077 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 02:53:49.897019 2405077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 02:53:49.913676 2405077 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 02:53:49.934510 2405077 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1006 02:53:49.934568 2405077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 02:53:49.947817 2405077 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1006 02:53:49.947894 2405077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 02:53:49.961100 2405077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 02:53:49.974165 2405077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1006 02:53:49.988718 2405077 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 02:53:50.014719 2405077 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 02:53:50.026707 2405077 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 02:53:50.038566 2405077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 02:53:50.147283 2405077 ssh_runner.go:195] Run: sudo systemctl restart crio
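	The sed edits above land in crio's drop-in config; once crio restarts, the effective values can be confirmed inside the node, e.g.:

		grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf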
	I1006 02:53:50.284060 2405077 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1006 02:53:50.284123 2405077 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1006 02:53:50.291083 2405077 start.go:540] Will wait 60s for crictl version
	I1006 02:53:50.291150 2405077 ssh_runner.go:195] Run: which crictl
	I1006 02:53:50.296112 2405077 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1006 02:53:50.351375 2405077 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1006 02:53:50.351455 2405077 ssh_runner.go:195] Run: crio --version
	I1006 02:53:50.398051 2405077 ssh_runner.go:195] Run: crio --version
	I1006 02:53:50.453705 2405077 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	I1006 02:53:47.532776 2399191 pod_ready.go:102] pod "coredns-5dd5756b68-qvjlc" in "kube-system" namespace has status "Ready":"False"
	I1006 02:53:49.533221 2399191 pod_ready.go:92] pod "coredns-5dd5756b68-qvjlc" in "kube-system" namespace has status "Ready":"True"
	I1006 02:53:49.533240 2399191 pod_ready.go:81] duration metric: took 4.020762669s waiting for pod "coredns-5dd5756b68-qvjlc" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:49.533251 2399191 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:49.541162 2399191 pod_ready.go:92] pod "etcd-pause-647181" in "kube-system" namespace has status "Ready":"True"
	I1006 02:53:49.541231 2399191 pod_ready.go:81] duration metric: took 7.96315ms waiting for pod "etcd-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:49.541260 2399191 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:50.455709 2405077 cli_runner.go:164] Run: docker network inspect cert-expiration-885413 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 02:53:50.473379 2405077 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1006 02:53:50.478056 2405077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 02:53:50.492957 2405077 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1006 02:53:50.493011 2405077 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 02:53:50.566160 2405077 crio.go:496] all images are preloaded for cri-o runtime.
	I1006 02:53:50.566171 2405077 crio.go:415] Images already preloaded, skipping extraction
	I1006 02:53:50.566237 2405077 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 02:53:50.610992 2405077 crio.go:496] all images are preloaded for cri-o runtime.
	I1006 02:53:50.611003 2405077 cache_images.go:84] Images are preloaded, skipping loading
	I1006 02:53:50.611137 2405077 ssh_runner.go:195] Run: crio config
	I1006 02:53:50.667906 2405077 cni.go:84] Creating CNI manager for ""
	I1006 02:53:50.667917 2405077 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 02:53:50.667939 2405077 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1006 02:53:50.667957 2405077 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-885413 NodeName:cert-expiration-885413 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 02:53:50.668085 2405077 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-885413"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
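	The generated file stacks InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration into one document. For comparison against upstream defaults of the same kubeadm version, a sketch:

		kubeadm config print init-defaults \
		  --component-configs KubeletConfiguration,KubeProxyConfiguration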
	
	I1006 02:53:50.668160 2405077 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=cert-expiration-885413 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:cert-expiration-885413 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
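	The empty ExecStart= line is the standard systemd drop-in idiom for replacing, rather than appending to, the base unit's start command. Reduced to its skeleton (flags abbreviated from the line above):

		# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
		[Service]
		ExecStart=
		ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf

		# pick up the override:
		sudo systemctl daemon-reload && sudo systemctl restart kubelet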
	I1006 02:53:50.668223 2405077 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1006 02:53:50.679723 2405077 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 02:53:50.679793 2405077 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 02:53:50.690549 2405077 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I1006 02:53:50.711802 2405077 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 02:53:50.733502 2405077 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I1006 02:53:51.567383 2399191 pod_ready.go:102] pod "kube-apiserver-pause-647181" in "kube-system" namespace has status "Ready":"False"
	I1006 02:53:52.566289 2399191 pod_ready.go:92] pod "kube-apiserver-pause-647181" in "kube-system" namespace has status "Ready":"True"
	I1006 02:53:52.566311 2399191 pod_ready.go:81] duration metric: took 3.025028247s waiting for pod "kube-apiserver-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:52.566323 2399191 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:53.587501 2399191 pod_ready.go:92] pod "kube-controller-manager-pause-647181" in "kube-system" namespace has status "Ready":"True"
	I1006 02:53:53.587572 2399191 pod_ready.go:81] duration metric: took 1.021241s waiting for pod "kube-controller-manager-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:53.587613 2399191 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9vvq2" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:53.665492 2399191 pod_ready.go:92] pod "kube-proxy-9vvq2" in "kube-system" namespace has status "Ready":"True"
	I1006 02:53:53.665561 2399191 pod_ready.go:81] duration metric: took 77.922111ms waiting for pod "kube-proxy-9vvq2" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:53.665585 2399191 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:54.066851 2399191 pod_ready.go:92] pod "kube-scheduler-pause-647181" in "kube-system" namespace has status "Ready":"True"
	I1006 02:53:54.067004 2399191 pod_ready.go:81] duration metric: took 401.387707ms waiting for pod "kube-scheduler-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:54.067031 2399191 pod_ready.go:38] duration metric: took 8.560625129s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
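	The readiness sweep keys off the component/k8s-app labels listed above; the equivalent one-shot query, as a sketch:

		kubectl -n kube-system get pods \
		  -l 'component in (etcd, kube-apiserver, kube-controller-manager, kube-scheduler)'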
	I1006 02:53:54.067102 2399191 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1006 02:53:54.083296 2399191 ops.go:34] apiserver oom_adj: -16
	I1006 02:53:54.083513 2399191 kubeadm.go:640] restartCluster took 1m6.931273862s
	I1006 02:53:54.083541 2399191 kubeadm.go:406] StartCluster complete in 1m7.04546831s
	I1006 02:53:54.083592 2399191 settings.go:142] acquiring lock: {Name:mkbf8759b61c125112e0d07f4c53bb4e84a6de4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:53:54.083808 2399191 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17314-2262959/kubeconfig
	I1006 02:53:54.085262 2399191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/kubeconfig: {Name:mkf2ae4867a3638ac11dac5beae6919a4f83b43a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:53:54.085709 2399191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1006 02:53:54.086445 2399191 config.go:182] Loaded profile config "pause-647181": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1006 02:53:54.086587 2399191 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1006 02:53:54.089245 2399191 out.go:177] * Enabled addons: 
	I1006 02:53:54.088072 2399191 kapi.go:59] client config for pause-647181: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/pause-647181/client.crt", KeyFile:"/home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/pause-647181/client.key", CAFile:"/home/jenkins/minikube-integration/17314-2262959/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16a24a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1006 02:53:54.092384 2399191 addons.go:502] enable addons completed in 5.7975ms: enabled=[]
	I1006 02:53:54.096110 2399191 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-647181" context rescaled to 1 replicas
	I1006 02:53:54.096165 2399191 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1006 02:53:54.098257 2399191 out.go:177] * Verifying Kubernetes components...
	I1006 02:53:54.100537 2399191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 02:53:54.263527 2399191 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1006 02:53:54.263600 2399191 node_ready.go:35] waiting up to 6m0s for node "pause-647181" to be "Ready" ...
	I1006 02:53:54.268610 2399191 node_ready.go:49] node "pause-647181" has status "Ready":"True"
	I1006 02:53:54.268689 2399191 node_ready.go:38] duration metric: took 5.07505ms waiting for node "pause-647181" to be "Ready" ...
	I1006 02:53:54.268715 2399191 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1006 02:53:54.475622 2399191 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qvjlc" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:54.865846 2399191 pod_ready.go:92] pod "coredns-5dd5756b68-qvjlc" in "kube-system" namespace has status "Ready":"True"
	I1006 02:53:54.865922 2399191 pod_ready.go:81] duration metric: took 390.233621ms waiting for pod "coredns-5dd5756b68-qvjlc" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:54.865958 2399191 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:55.264593 2399191 pod_ready.go:92] pod "etcd-pause-647181" in "kube-system" namespace has status "Ready":"True"
	I1006 02:53:55.264632 2399191 pod_ready.go:81] duration metric: took 398.648295ms waiting for pod "etcd-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:55.264662 2399191 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:50.755270 2405077 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1006 02:53:50.759715 2405077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
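
The one-liner above is minikube's idempotent /etc/hosts update: grep -v strips any stale control-plane.minikube.internal entry, echo appends the current IP, and the result is staged in a temp file and copied back so a failed write cannot truncate the live file. A Go sketch of the same read-filter-append-replace pattern, with an illustrative target path:

package main

import (
	"fmt"
	"os"
	"strings"
)

// setHostsEntry drops any existing "<ip>\t<host>" line and appends a fresh
// one, staging the rewrite in a sibling temp file before renaming it over
// the original. (The log's one-liner stages in /tmp and uses `sudo cp`
// instead, since the SSH user cannot rename into /etc directly.)
func setHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale entry: drop it
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	if err := setHostsEntry("/tmp/hosts-demo", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
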
	I1006 02:53:50.772847 2405077 certs.go:56] Setting up /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413 for IP: 192.168.76.2
	I1006 02:53:50.772867 2405077 certs.go:190] acquiring lock for shared ca certs: {Name:mk4532656c287f04abff160cc5263fddcb69ac4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:53:50.772996 2405077 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.key
	I1006 02:53:50.773037 2405077 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17314-2262959/.minikube/proxy-client-ca.key
	I1006 02:53:50.773084 2405077 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/client.key
	I1006 02:53:50.773093 2405077 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/client.crt with IP's: []
	I1006 02:53:50.991971 2405077 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/client.crt ...
	I1006 02:53:50.991992 2405077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/client.crt: {Name:mk611f3886fd953cd3cf4b41020772de97a746bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:53:50.993488 2405077 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/client.key ...
	I1006 02:53:50.993514 2405077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/client.key: {Name:mk6bcb61dbd5d64963346b3ee83acb593d4e2699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:53:50.993659 2405077 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/apiserver.key.31bdca25
	I1006 02:53:50.993674 2405077 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1006 02:53:51.304114 2405077 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/apiserver.crt.31bdca25 ...
	I1006 02:53:51.304129 2405077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/apiserver.crt.31bdca25: {Name:mk1bdf1d5890ea05ae9a237ae709bf6659b2149d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:53:51.304323 2405077 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/apiserver.key.31bdca25 ...
	I1006 02:53:51.304331 2405077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/apiserver.key.31bdca25: {Name:mk5ed8e145784bed8fef156cd6a5c89a1b49de8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:53:51.304431 2405077 certs.go:337] copying /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/apiserver.crt
	I1006 02:53:51.304507 2405077 certs.go:341] copying /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/apiserver.key
	I1006 02:53:51.304558 2405077 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/proxy-client.key
	I1006 02:53:51.304570 2405077 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/proxy-client.crt with IP's: []
	I1006 02:53:51.647885 2405077 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/proxy-client.crt ...
	I1006 02:53:51.647898 2405077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/proxy-client.crt: {Name:mk9696d0a1eb19a5be8f41d3614a368900e194d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:53:51.648089 2405077 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/proxy-client.key ...
	I1006 02:53:51.648096 2405077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/proxy-client.key: {Name:mkd834bc83e0c5ff27f9947981c312bd4dc0e865 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
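
Each crypto.go pair above (generate, then write cert and key) issues a leaf certificate signed by one of the shared CAs, with the SANs listed on the "with IP's:" line. A compact crypto/x509 sketch of that signing step, using a throwaway self-signed CA and illustrative subject, SANs, and lifetime (minikube's real defaults differ):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway self-signed CA standing in for minikubeCA.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "demoCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	must(err)
	caCert, err := x509.ParseCertificate(caDER)
	must(err)

	// Leaf cert signed by the CA, with IP SANs like the apiserver cert above.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	leaf := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "demo-apiserver"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("10.96.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, leaf, caCert, &key.PublicKey, caKey)
	must(err)
	must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}
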
	I1006 02:53:51.648903 2405077 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/2268306.pem (1338 bytes)
	W1006 02:53:51.648941 2405077 certs.go:433] ignoring /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/2268306_empty.pem, impossibly tiny 0 bytes
	I1006 02:53:51.648952 2405077 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 02:53:51.648976 2405077 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/ca.pem (1082 bytes)
	I1006 02:53:51.648998 2405077 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/cert.pem (1123 bytes)
	I1006 02:53:51.649020 2405077 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/certs/key.pem (1675 bytes)
	I1006 02:53:51.649067 2405077 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/22683062.pem (1708 bytes)
	I1006 02:53:51.649674 2405077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1006 02:53:51.679192 2405077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1006 02:53:51.709535 2405077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 02:53:51.738049 2405077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/cert-expiration-885413/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 02:53:51.767065 2405077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 02:53:51.796576 2405077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 02:53:51.826160 2405077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 02:53:51.856619 2405077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1006 02:53:51.885574 2405077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/certs/2268306.pem --> /usr/share/ca-certificates/2268306.pem (1338 bytes)
	I1006 02:53:51.915919 2405077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/ssl/certs/22683062.pem --> /usr/share/ca-certificates/22683062.pem (1708 bytes)
	I1006 02:53:51.945934 2405077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 02:53:51.976110 2405077 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 02:53:51.999826 2405077 ssh_runner.go:195] Run: openssl version
	I1006 02:53:52.007590 2405077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 02:53:52.020067 2405077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 02:53:52.025430 2405077 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  6 02:12 /usr/share/ca-certificates/minikubeCA.pem
	I1006 02:53:52.025495 2405077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 02:53:52.037439 2405077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 02:53:52.056447 2405077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2268306.pem && ln -fs /usr/share/ca-certificates/2268306.pem /etc/ssl/certs/2268306.pem"
	I1006 02:53:52.077891 2405077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2268306.pem
	I1006 02:53:52.086035 2405077 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  6 02:19 /usr/share/ca-certificates/2268306.pem
	I1006 02:53:52.086118 2405077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2268306.pem
	I1006 02:53:52.097252 2405077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2268306.pem /etc/ssl/certs/51391683.0"
	I1006 02:53:52.109376 2405077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22683062.pem && ln -fs /usr/share/ca-certificates/22683062.pem /etc/ssl/certs/22683062.pem"
	I1006 02:53:52.121918 2405077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22683062.pem
	I1006 02:53:52.126889 2405077 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  6 02:19 /usr/share/ca-certificates/22683062.pem
	I1006 02:53:52.126945 2405077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22683062.pem
	I1006 02:53:52.136128 2405077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22683062.pem /etc/ssl/certs/3ec20f2e.0"
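
The openssl/ln sequence above installs each CA into the system trust store the way OpenSSL expects: compute the certificate's subject hash, then symlink /etc/ssl/certs/<hash>.0 at the PEM file. A small Go sketch that shells out to openssl for the hash; the target directory here is illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash computes the cert's OpenSSL subject hash and creates the
// <hash>.0 symlink that OpenSSL-based clients use for CA lookup.
func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // replace any stale link, mirroring `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/tmp/certs-demo"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
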
	I1006 02:53:52.149380 2405077 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1006 02:53:52.154145 2405077 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1006 02:53:52.154199 2405077 kubeadm.go:404] StartCluster: {Name:cert-expiration-885413 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:cert-expiration-885413 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1006 02:53:52.154281 2405077 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1006 02:53:52.154340 2405077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 02:53:52.197009 2405077 cri.go:89] found id: ""
	I1006 02:53:52.197071 2405077 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 02:53:52.208186 2405077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 02:53:52.219206 2405077 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1006 02:53:52.219268 2405077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 02:53:52.230044 2405077 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 02:53:52.230079 2405077 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 02:53:52.345311 2405077 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-aws\n", err: exit status 1
	I1006 02:53:52.430131 2405077 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 02:53:55.665133 2399191 pod_ready.go:92] pod "kube-apiserver-pause-647181" in "kube-system" namespace has status "Ready":"True"
	I1006 02:53:55.665167 2399191 pod_ready.go:81] duration metric: took 400.490903ms waiting for pod "kube-apiserver-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:55.665180 2399191 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:56.064620 2399191 pod_ready.go:92] pod "kube-controller-manager-pause-647181" in "kube-system" namespace has status "Ready":"True"
	I1006 02:53:56.064693 2399191 pod_ready.go:81] duration metric: took 399.50313ms waiting for pod "kube-controller-manager-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:56.064710 2399191 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9vvq2" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:56.468816 2399191 pod_ready.go:92] pod "kube-proxy-9vvq2" in "kube-system" namespace has status "Ready":"True"
	I1006 02:53:56.468843 2399191 pod_ready.go:81] duration metric: took 404.124547ms waiting for pod "kube-proxy-9vvq2" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:56.468866 2399191 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:56.865980 2399191 pod_ready.go:92] pod "kube-scheduler-pause-647181" in "kube-system" namespace has status "Ready":"True"
	I1006 02:53:56.866009 2399191 pod_ready.go:81] duration metric: took 397.135665ms waiting for pod "kube-scheduler-pause-647181" in "kube-system" namespace to be "Ready" ...
	I1006 02:53:56.866019 2399191 pod_ready.go:38] duration metric: took 2.597262607s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1006 02:53:56.866034 2399191 api_server.go:52] waiting for apiserver process to appear ...
	I1006 02:53:56.866097 2399191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 02:53:56.881063 2399191 api_server.go:72] duration metric: took 2.784851851s to wait for apiserver process to appear ...
	I1006 02:53:56.881088 2399191 api_server.go:88] waiting for apiserver healthz status ...
	I1006 02:53:56.881105 2399191 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1006 02:53:56.890944 2399191 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1006 02:53:56.892803 2399191 api_server.go:141] control plane version: v1.28.2
	I1006 02:53:56.892835 2399191 api_server.go:131] duration metric: took 11.740115ms to wait for apiserver health ...
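
The healthz probe above is a plain HTTPS GET that succeeds once the endpoint returns 200 with body "ok". A standalone sketch of such a poller follows; the InsecureSkipVerify transport is a demo shortcut only, whereas minikube verifies the apiserver against the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// waitHealthz polls url until it answers 200/"ok" or the timeout elapses.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // demo shortcut only
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz not ok within %s", timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.67.2:8443/healthz", time.Minute))
}
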
	I1006 02:53:56.892845 2399191 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 02:53:57.069255 2399191 system_pods.go:59] 7 kube-system pods found
	I1006 02:53:57.069337 2399191 system_pods.go:61] "coredns-5dd5756b68-qvjlc" [5582861e-0771-4be9-85fa-3194c946e4bc] Running
	I1006 02:53:57.069359 2399191 system_pods.go:61] "etcd-pause-647181" [2278dbab-5509-4155-ab45-4bbc4a195cea] Running
	I1006 02:53:57.069382 2399191 system_pods.go:61] "kindnet-5zz7b" [006acf6b-70f8-4596-8965-0f13beb4fff6] Running
	I1006 02:53:57.069420 2399191 system_pods.go:61] "kube-apiserver-pause-647181" [d0788f9d-f101-42ba-9145-4435f137fa8b] Running
	I1006 02:53:57.069452 2399191 system_pods.go:61] "kube-controller-manager-pause-647181" [255471e6-b5e4-4d3e-b091-02979c39ab2a] Running
	I1006 02:53:57.069473 2399191 system_pods.go:61] "kube-proxy-9vvq2" [eec193c3-c96f-43e8-a0c3-e0964c0c7b51] Running
	I1006 02:53:57.069511 2399191 system_pods.go:61] "kube-scheduler-pause-647181" [cb621118-2a4c-4ed4-88ad-f64283986b73] Running
	I1006 02:53:57.069539 2399191 system_pods.go:74] duration metric: took 176.68489ms to wait for pod list to return data ...
	I1006 02:53:57.069564 2399191 default_sa.go:34] waiting for default service account to be created ...
	I1006 02:53:57.263979 2399191 default_sa.go:45] found service account: "default"
	I1006 02:53:57.264049 2399191 default_sa.go:55] duration metric: took 194.449112ms for default service account to be created ...
	I1006 02:53:57.264074 2399191 system_pods.go:116] waiting for k8s-apps to be running ...
	I1006 02:53:57.469340 2399191 system_pods.go:86] 7 kube-system pods found
	I1006 02:53:57.469414 2399191 system_pods.go:89] "coredns-5dd5756b68-qvjlc" [5582861e-0771-4be9-85fa-3194c946e4bc] Running
	I1006 02:53:57.469453 2399191 system_pods.go:89] "etcd-pause-647181" [2278dbab-5509-4155-ab45-4bbc4a195cea] Running
	I1006 02:53:57.469482 2399191 system_pods.go:89] "kindnet-5zz7b" [006acf6b-70f8-4596-8965-0f13beb4fff6] Running
	I1006 02:53:57.469503 2399191 system_pods.go:89] "kube-apiserver-pause-647181" [d0788f9d-f101-42ba-9145-4435f137fa8b] Running
	I1006 02:53:57.469535 2399191 system_pods.go:89] "kube-controller-manager-pause-647181" [255471e6-b5e4-4d3e-b091-02979c39ab2a] Running
	I1006 02:53:57.469561 2399191 system_pods.go:89] "kube-proxy-9vvq2" [eec193c3-c96f-43e8-a0c3-e0964c0c7b51] Running
	I1006 02:53:57.469580 2399191 system_pods.go:89] "kube-scheduler-pause-647181" [cb621118-2a4c-4ed4-88ad-f64283986b73] Running
	I1006 02:53:57.469615 2399191 system_pods.go:126] duration metric: took 205.522424ms to wait for k8s-apps to be running ...
	I1006 02:53:57.469648 2399191 system_svc.go:44] waiting for kubelet service to be running ....
	I1006 02:53:57.469734 2399191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 02:53:57.490000 2399191 system_svc.go:56] duration metric: took 20.348879ms WaitForService to wait for kubelet.
	I1006 02:53:57.490074 2399191 kubeadm.go:581] duration metric: took 3.393879604s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1006 02:53:57.490121 2399191 node_conditions.go:102] verifying NodePressure condition ...
	I1006 02:53:57.665418 2399191 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 02:53:57.665492 2399191 node_conditions.go:123] node cpu capacity is 2
	I1006 02:53:57.665516 2399191 node_conditions.go:105] duration metric: took 175.362394ms to run NodePressure ...
	I1006 02:53:57.665541 2399191 start.go:228] waiting for startup goroutines ...
	I1006 02:53:57.665575 2399191 start.go:233] waiting for cluster config update ...
	I1006 02:53:57.665601 2399191 start.go:242] writing updated cluster config ...
	I1006 02:53:57.665988 2399191 ssh_runner.go:195] Run: rm -f paused
	I1006 02:53:57.749083 2399191 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1006 02:53:57.752833 2399191 out.go:177] * Done! kubectl is now configured to use "pause-647181" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Oct 06 02:53:42 pause-647181 crio[2592]: time="2023-10-06 02:53:42.193233511Z" level=info msg="Creating container: kube-system/coredns-5dd5756b68-qvjlc/coredns" id=cb852b53-b0df-4325-a85a-5eaf8054fd98 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 02:53:42 pause-647181 crio[2592]: time="2023-10-06 02:53:42.193336593Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 06 02:53:42 pause-647181 crio[2592]: time="2023-10-06 02:53:42.240434039Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/0371481bdf1e400611c971aca35803110cdca48f03942d685f0477c280f68930/merged/etc/passwd: no such file or directory"
	Oct 06 02:53:42 pause-647181 crio[2592]: time="2023-10-06 02:53:42.240652453Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0371481bdf1e400611c971aca35803110cdca48f03942d685f0477c280f68930/merged/etc/group: no such file or directory"
	Oct 06 02:53:42 pause-647181 crio[2592]: time="2023-10-06 02:53:42.449315186Z" level=info msg="Created container e6461d2a0e977c3051c8138bd4506ebdcbf46d670d3499760ac6180283967197: kube-system/coredns-5dd5756b68-qvjlc/coredns" id=cb852b53-b0df-4325-a85a-5eaf8054fd98 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 02:53:42 pause-647181 crio[2592]: time="2023-10-06 02:53:42.450582427Z" level=info msg="Starting container: e6461d2a0e977c3051c8138bd4506ebdcbf46d670d3499760ac6180283967197" id=0b837981-6fff-40f0-a2f9-421d76d0313c name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 02:53:42 pause-647181 crio[2592]: time="2023-10-06 02:53:42.470893276Z" level=info msg="Started container" PID=3992 containerID=e6461d2a0e977c3051c8138bd4506ebdcbf46d670d3499760ac6180283967197 description=kube-system/coredns-5dd5756b68-qvjlc/coredns id=0b837981-6fff-40f0-a2f9-421d76d0313c name=/runtime.v1.RuntimeService/StartContainer sandboxID=b50c32f244e573157908df70194d395473663708f3c47b6310cbdbf9bee5ff91
	Oct 06 02:53:42 pause-647181 crio[2592]: time="2023-10-06 02:53:42.514936015Z" level=info msg="Created container a75615795b0c938988076ada65f9dde5c43bd58a436d4cdfb9e73713f8b80405: kube-system/kindnet-5zz7b/kindnet-cni" id=4a61d2f7-20ee-41a8-be1c-d2ed92e255df name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 02:53:42 pause-647181 crio[2592]: time="2023-10-06 02:53:42.516084051Z" level=info msg="Starting container: a75615795b0c938988076ada65f9dde5c43bd58a436d4cdfb9e73713f8b80405" id=fa8beab8-65cb-4469-9f0a-60c36247ac03 name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 02:53:42 pause-647181 crio[2592]: time="2023-10-06 02:53:42.540162316Z" level=info msg="Started container" PID=3979 containerID=a75615795b0c938988076ada65f9dde5c43bd58a436d4cdfb9e73713f8b80405 description=kube-system/kindnet-5zz7b/kindnet-cni id=fa8beab8-65cb-4469-9f0a-60c36247ac03 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6a34392daaf3058623479fc0d80145d7832dfe21eb0d18f7962afab42660bf22
	Oct 06 02:53:42 pause-647181 crio[2592]: time="2023-10-06 02:53:42.627262395Z" level=info msg="Created container 0651a743420f5675b7b2f78756250ac69a5295917655f1ef7455ce8e1c136edd: kube-system/kube-proxy-9vvq2/kube-proxy" id=8f5a4e70-e765-4b80-8eed-17e404e694f7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 06 02:53:42 pause-647181 crio[2592]: time="2023-10-06 02:53:42.627988323Z" level=info msg="Starting container: 0651a743420f5675b7b2f78756250ac69a5295917655f1ef7455ce8e1c136edd" id=2450f670-33c8-4f52-b165-64a04e8cff4e name=/runtime.v1.RuntimeService/StartContainer
	Oct 06 02:53:42 pause-647181 crio[2592]: time="2023-10-06 02:53:42.672099560Z" level=info msg="Started container" PID=3981 containerID=0651a743420f5675b7b2f78756250ac69a5295917655f1ef7455ce8e1c136edd description=kube-system/kube-proxy-9vvq2/kube-proxy id=2450f670-33c8-4f52-b165-64a04e8cff4e name=/runtime.v1.RuntimeService/StartContainer sandboxID=1e78cd75160e76cccdaa932000b542af1336bcbfb8f705e538c5b0ae4b55d0e1
	Oct 06 02:53:42 pause-647181 crio[2592]: time="2023-10-06 02:53:42.973082483Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Oct 06 02:53:43 pause-647181 crio[2592]: time="2023-10-06 02:53:43.041226584Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 02:53:43 pause-647181 crio[2592]: time="2023-10-06 02:53:43.041262392Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 02:53:43 pause-647181 crio[2592]: time="2023-10-06 02:53:43.041278482Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Oct 06 02:53:43 pause-647181 crio[2592]: time="2023-10-06 02:53:43.079382262Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 02:53:43 pause-647181 crio[2592]: time="2023-10-06 02:53:43.079420039Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 02:53:43 pause-647181 crio[2592]: time="2023-10-06 02:53:43.079437541Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Oct 06 02:53:43 pause-647181 crio[2592]: time="2023-10-06 02:53:43.093591262Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 02:53:43 pause-647181 crio[2592]: time="2023-10-06 02:53:43.093624387Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 06 02:53:43 pause-647181 crio[2592]: time="2023-10-06 02:53:43.093640256Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Oct 06 02:53:43 pause-647181 crio[2592]: time="2023-10-06 02:53:43.133020179Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 06 02:53:43 pause-647181 crio[2592]: time="2023-10-06 02:53:43.133052048Z" level=info msg="Updated default CNI network name to kindnet"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e6461d2a0e977       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   21 seconds ago       Running             coredns                   3                   b50c32f244e57       coredns-5dd5756b68-qvjlc
	0651a743420f5       7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa   21 seconds ago       Running             kube-proxy                3                   1e78cd75160e7       kube-proxy-9vvq2
	a75615795b0c9       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   21 seconds ago       Running             kindnet-cni               3                   6a34392daaf30       kindnet-5zz7b
	49ce28bf9b8d6       89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c   27 seconds ago       Running             kube-controller-manager   3                   92e0672bb5ea7       kube-controller-manager-pause-647181
	5d6de69c7f93e       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   27 seconds ago       Running             etcd                      3                   b6c9c6e9b443f       etcd-pause-647181
	65455e488da1a       30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c   28 seconds ago       Running             kube-apiserver            3                   2d20e802177f3       kube-apiserver-pause-647181
	d5d42bb4b86d0       64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7   28 seconds ago       Running             kube-scheduler            3                   32a3093f8b26d       kube-scheduler-pause-647181
	0c7033aa51936       64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7   55 seconds ago       Exited              kube-scheduler            2                   32a3093f8b26d       kube-scheduler-pause-647181
	1f627028913f8       89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c   55 seconds ago       Exited              kube-controller-manager   2                   92e0672bb5ea7       kube-controller-manager-pause-647181
	5481ab047c1f3       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   55 seconds ago       Exited              coredns                   2                   b50c32f244e57       coredns-5dd5756b68-qvjlc
	614ae4c2dae8d       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   56 seconds ago       Exited              etcd                      2                   b6c9c6e9b443f       etcd-pause-647181
	1f7d2ed33dd36       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   About a minute ago   Exited              kindnet-cni               2                   6a34392daaf30       kindnet-5zz7b
	0cf5928c77815       7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa   About a minute ago   Exited              kube-proxy                2                   1e78cd75160e7       kube-proxy-9vvq2
	4e835c4e54a76       30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c   About a minute ago   Exited              kube-apiserver            2                   2d20e802177f3       kube-apiserver-pause-647181
	
	* 
	* ==> coredns [5481ab047c1f37fc36bbac6bffa2691ed4776094a359b4a13151405f76d97cb7] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41771 - 40892 "HINFO IN 7976085451383260252.3804424261778561535. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026451813s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [e6461d2a0e977c3051c8138bd4506ebdcbf46d670d3499760ac6180283967197] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34437 - 15200 "HINFO IN 221119102785819024.72943900817473966. udp 54 false 512" NXDOMAIN qr,rd,ra 54 0.01347316s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-647181
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-647181
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=84890cb24d0240d9d992d7c7712ee519ceed4154
	                    minikube.k8s.io/name=pause-647181
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_06T02_51_48_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Oct 2023 02:51:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-647181
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Oct 2023 02:54:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Oct 2023 02:53:42 +0000   Fri, 06 Oct 2023 02:51:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Oct 2023 02:53:42 +0000   Fri, 06 Oct 2023 02:51:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Oct 2023 02:53:42 +0000   Fri, 06 Oct 2023 02:51:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Oct 2023 02:53:42 +0000   Fri, 06 Oct 2023 02:52:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    pause-647181
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 c4dd99cc55f149138895638849151b21
	  System UUID:                688f5c64-ef7e-47ff-87cf-6335f6e0d45a
	  Boot ID:                    d6810820-8fb1-4098-8489-41f3441712b9
	  Kernel Version:             5.15.0-1047-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-qvjlc                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m5s
	  kube-system                 etcd-pause-647181                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m17s
	  kube-system                 kindnet-5zz7b                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m5s
	  kube-system                 kube-apiserver-pause-647181             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-controller-manager-pause-647181    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-proxy-9vvq2                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-scheduler-pause-647181             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m1s                   kube-proxy       
	  Normal  Starting                 20s                    kube-proxy       
	  Normal  Starting                 47s                    kube-proxy       
	  Normal  Starting                 2m28s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    2m27s (x8 over 2m28s)  kubelet          Node pause-647181 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m27s (x8 over 2m28s)  kubelet          Node pause-647181 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m27s (x8 over 2m28s)  kubelet          Node pause-647181 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m17s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m17s                  kubelet          Node pause-647181 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m17s                  kubelet          Node pause-647181 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m17s                  kubelet          Node pause-647181 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m6s                   node-controller  Node pause-647181 event: Registered Node pause-647181 in Controller
	  Normal  NodeReady                93s                    kubelet          Node pause-647181 status is now: NodeReady
	  Normal  Starting                 30s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s (x8 over 30s)      kubelet          Node pause-647181 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s (x8 over 30s)      kubelet          Node pause-647181 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s (x8 over 30s)      kubelet          Node pause-647181 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10s                    node-controller  Node pause-647181 event: Registered Node pause-647181 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.001054] FS-Cache: O-key=[8] '276a3b0000000000'
	[  +0.000712] FS-Cache: N-cookie c=0000009c [p=00000093 fl=2 nc=0 na=1]
	[  +0.000923] FS-Cache: N-cookie d=00000000b75c0848{9p.inode} n=00000000ea8162ba
	[  +0.001002] FS-Cache: N-key=[8] '276a3b0000000000'
	[  +0.002663] FS-Cache: Duplicate cookie detected
	[  +0.000700] FS-Cache: O-cookie c=00000096 [p=00000093 fl=226 nc=0 na=1]
	[  +0.000920] FS-Cache: O-cookie d=00000000b75c0848{9p.inode} n=00000000fc3db6f4
	[  +0.000983] FS-Cache: O-key=[8] '276a3b0000000000'
	[  +0.000674] FS-Cache: N-cookie c=0000009d [p=00000093 fl=2 nc=0 na=1]
	[  +0.000946] FS-Cache: N-cookie d=00000000b75c0848{9p.inode} n=00000000b9bd865e
	[  +0.000999] FS-Cache: N-key=[8] '276a3b0000000000'
	[  +2.732427] FS-Cache: Duplicate cookie detected
	[  +0.000701] FS-Cache: O-cookie c=00000094 [p=00000093 fl=226 nc=0 na=1]
	[  +0.000949] FS-Cache: O-cookie d=00000000b75c0848{9p.inode} n=00000000d93f34d8
	[  +0.000995] FS-Cache: O-key=[8] '266a3b0000000000'
	[  +0.000657] FS-Cache: N-cookie c=0000009f [p=00000093 fl=2 nc=0 na=1]
	[  +0.000872] FS-Cache: N-cookie d=00000000b75c0848{9p.inode} n=000000000c4a9176
	[  +0.000974] FS-Cache: N-key=[8] '266a3b0000000000'
	[  +0.306196] FS-Cache: Duplicate cookie detected
	[  +0.000700] FS-Cache: O-cookie c=00000099 [p=00000093 fl=226 nc=0 na=1]
	[  +0.000922] FS-Cache: O-cookie d=00000000b75c0848{9p.inode} n=0000000019ad38e0
	[  +0.001027] FS-Cache: O-key=[8] '2e6a3b0000000000'
	[  +0.000669] FS-Cache: N-cookie c=000000a0 [p=00000093 fl=2 nc=0 na=1]
	[  +0.000879] FS-Cache: N-cookie d=00000000b75c0848{9p.inode} n=00000000ea8162ba
	[  +0.000981] FS-Cache: N-key=[8] '2e6a3b0000000000'
	
	* 
	* ==> etcd [5d6de69c7f93e007181208271bac8495ce391ad74118e1200bb41d1979af4135] <==
	* {"level":"info","ts":"2023-10-06T02:53:36.418493Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-06T02:53:36.423418Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-06T02:53:36.423225Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-10-06T02:53:36.424046Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-10-06T02:53:36.423967Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-06T02:53:38.043117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 3"}
	{"level":"info","ts":"2023-10-06T02:53:38.043227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-10-06T02:53:38.043269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-10-06T02:53:38.04333Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 4"}
	{"level":"info","ts":"2023-10-06T02:53:38.043371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2023-10-06T02:53:38.043412Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 4"}
	{"level":"info","ts":"2023-10-06T02:53:38.043444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2023-10-06T02:53:38.047328Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-647181 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-06T02:53:38.047852Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-06T02:53:38.048915Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-06T02:53:38.055226Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-06T02:53:38.055253Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-06T02:53:38.066883Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-06T02:53:38.095969Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-10-06T02:53:41.974846Z","caller":"traceutil/trace.go:171","msg":"trace[1316894804] transaction","detail":"{read_only:false; number_of_response:0; response_revision:471; }","duration":"115.183197ms","start":"2023-10-06T02:53:41.859647Z","end":"2023-10-06T02:53:41.97483Z","steps":["trace[1316894804] 'process raft request'  (duration: 41.61226ms)","trace[1316894804] 'compare'  (duration: 73.497353ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-06T02:53:41.975284Z","caller":"traceutil/trace.go:171","msg":"trace[848629592] linearizableReadLoop","detail":"{readStateIndex:496; appliedIndex:495; }","duration":"115.534723ms","start":"2023-10-06T02:53:41.859739Z","end":"2023-10-06T02:53:41.975274Z","steps":["trace[848629592] 'read index received'  (duration: 41.59763ms)","trace[848629592] 'applied index is now lower than readState.Index'  (duration: 73.936018ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-06T02:53:41.975399Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.659318ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/pause-647181\" ","response":"range_response_count:1 size:676"}
	{"level":"info","ts":"2023-10-06T02:53:41.975456Z","caller":"traceutil/trace.go:171","msg":"trace[2072830908] range","detail":"{range_begin:/registry/csinodes/pause-647181; range_end:; response_count:1; response_revision:471; }","duration":"115.730129ms","start":"2023-10-06T02:53:41.859719Z","end":"2023-10-06T02:53:41.975449Z","steps":["trace[2072830908] 'agreement among raft nodes before linearized reading'  (duration: 115.633414ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-06T02:53:41.978271Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.056229ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-06T02:53:41.978335Z","caller":"traceutil/trace.go:171","msg":"trace[962203764] range","detail":"{range_begin:/registry/masterleases/; range_end:/registry/masterleases0; response_count:0; response_revision:472; }","duration":"115.121568ms","start":"2023-10-06T02:53:41.863196Z","end":"2023-10-06T02:53:41.978318Z","steps":["trace[962203764] 'agreement among raft nodes before linearized reading'  (duration: 115.035183ms)"],"step_count":1}
	
	* 
	* ==> etcd [614ae4c2dae8d574f26260b3a7eafa44d81179dfec617cf466b75daffacc2b9a] <==
	* {"level":"info","ts":"2023-10-06T02:53:07.708387Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-06T02:53:08.691983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2023-10-06T02:53:08.692706Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-10-06T02:53:08.693263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-10-06T02:53:08.701311Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2023-10-06T02:53:08.701459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-10-06T02:53:08.701565Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2023-10-06T02:53:08.701655Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-10-06T02:53:08.718032Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-647181 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-06T02:53:08.718077Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-06T02:53:08.735438Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-06T02:53:08.736374Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-06T02:53:08.745231Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-10-06T02:53:08.759099Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-06T02:53:08.759133Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-06T02:53:21.753573Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-10-06T02:53:21.753633Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-647181","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	{"level":"warn","ts":"2023-10-06T02:53:21.753708Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-06T02:53:21.753778Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-06T02:53:21.804737Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.67.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-06T02:53:21.804881Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.67.2:2379: use of closed network connection"}
	{"level":"info","ts":"2023-10-06T02:53:21.804954Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2023-10-06T02:53:21.807626Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-10-06T02:53:21.807868Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-10-06T02:53:21.80791Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-647181","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	* 
	* ==> kernel <==
	*  02:54:04 up 12:36,  0 users,  load average: 4.67, 4.27, 2.94
	Linux pause-647181 5.15.0-1047-aws #52~20.04.1-Ubuntu SMP Thu Sep 21 10:08:54 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [1f7d2ed33dd3634ef3dd4898e89ea5b1a30b16dde68873ab0ec0afdcba6d3efd] <==
	* I1006 02:52:59.725430       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1006 02:52:59.725490       1 main.go:107] hostIP = 192.168.67.2
	podIP = 192.168.67.2
	I1006 02:52:59.725631       1 main.go:116] setting mtu 1500 for CNI 
	I1006 02:52:59.725641       1 main.go:146] kindnetd IP family: "ipv4"
	I1006 02:52:59.725652       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1006 02:53:10.035694       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I1006 02:53:14.807888       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1006 02:53:14.811104       1 main.go:227] handling current node
	
	* 
	* ==> kindnet [a75615795b0c938988076ada65f9dde5c43bd58a436d4cdfb9e73713f8b80405] <==
	* I1006 02:53:42.626310       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1006 02:53:42.626372       1 main.go:107] hostIP = 192.168.67.2
	podIP = 192.168.67.2
	I1006 02:53:42.626506       1 main.go:116] setting mtu 1500 for CNI 
	I1006 02:53:42.626519       1 main.go:146] kindnetd IP family: "ipv4"
	I1006 02:53:42.626530       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1006 02:53:42.972868       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1006 02:53:42.972916       1 main.go:227] handling current node
	I1006 02:53:53.040195       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1006 02:53:53.040421       1 main.go:227] handling current node
	I1006 02:54:03.059364       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1006 02:54:03.062728       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [4e835c4e54a761cd84f42ccbf369d2132f54b37768f4620253facaa380cca10e] <==
	* }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 02:53:31.947201       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 02:53:31.991893       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1006 02:53:32.123697       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-apiserver [65455e488da1a2ee47fc227e175d5f533b87130a3128f5bccecf01d6fce2062b] <==
	* I1006 02:53:41.593386       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1006 02:53:41.593418       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1006 02:53:41.593504       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1006 02:53:41.594673       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1006 02:53:41.594696       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1006 02:53:41.782079       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1006 02:53:41.782165       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1006 02:53:41.798950       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1006 02:53:41.809398       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1006 02:53:41.818064       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1006 02:53:41.818119       1 shared_informer.go:318] Caches are synced for configmaps
	I1006 02:53:41.818155       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1006 02:53:41.823551       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1006 02:53:41.823972       1 aggregator.go:166] initial CRD sync complete...
	I1006 02:53:41.823992       1 autoregister_controller.go:141] Starting autoregister controller
	I1006 02:53:41.823998       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1006 02:53:41.824005       1 cache.go:39] Caches are synced for autoregister controller
	I1006 02:53:41.859209       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1006 02:53:41.984377       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1006 02:53:42.648994       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1006 02:53:45.249216       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1006 02:53:45.403521       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1006 02:53:45.414858       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1006 02:53:45.478648       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1006 02:53:45.486365       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-controller-manager [1f627028913f83193d21e2ba9b23de5addd0af3f499f32d0a5491aa797f9b938] <==
	* I1006 02:53:10.540165       1 serving.go:348] Generated self-signed cert in-memory
	I1006 02:53:15.459670       1 controllermanager.go:189] "Starting" version="v1.28.2"
	I1006 02:53:15.459708       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 02:53:15.463947       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I1006 02:53:15.465102       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1006 02:53:15.465215       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1006 02:53:15.503352       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	* 
	* ==> kube-controller-manager [49ce28bf9b8d6d2e09af0cdf6857f5f6a87dd44d509b54fa9c94848af062a41f] <==
	* I1006 02:53:54.653893       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1006 02:53:54.653942       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1006 02:53:54.653971       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1006 02:53:54.654114       1 shared_informer.go:318] Caches are synced for stateful set
	I1006 02:53:54.657742       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I1006 02:53:54.657877       1 shared_informer.go:318] Caches are synced for PV protection
	I1006 02:53:54.659401       1 shared_informer.go:318] Caches are synced for persistent volume
	I1006 02:53:54.666824       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I1006 02:53:54.666975       1 shared_informer.go:318] Caches are synced for crt configmap
	I1006 02:53:54.670690       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1006 02:53:54.670835       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I1006 02:53:54.670958       1 shared_informer.go:318] Caches are synced for namespace
	I1006 02:53:54.679166       1 shared_informer.go:318] Caches are synced for ephemeral
	I1006 02:53:54.679255       1 shared_informer.go:318] Caches are synced for service account
	I1006 02:53:54.685138       1 shared_informer.go:318] Caches are synced for HPA
	I1006 02:53:54.685239       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I1006 02:53:54.687555       1 shared_informer.go:318] Caches are synced for GC
	I1006 02:53:54.694745       1 shared_informer.go:318] Caches are synced for attach detach
	I1006 02:53:54.704189       1 shared_informer.go:318] Caches are synced for disruption
	I1006 02:53:54.717134       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1006 02:53:54.740457       1 shared_informer.go:318] Caches are synced for resource quota
	I1006 02:53:54.746984       1 shared_informer.go:318] Caches are synced for resource quota
	I1006 02:53:55.144570       1 shared_informer.go:318] Caches are synced for garbage collector
	I1006 02:53:55.204396       1 shared_informer.go:318] Caches are synced for garbage collector
	I1006 02:53:55.204435       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-proxy [0651a743420f5675b7b2f78756250ac69a5295917655f1ef7455ce8e1c136edd] <==
	* I1006 02:53:42.987863       1 server_others.go:69] "Using iptables proxy"
	I1006 02:53:43.316170       1 node.go:141] Successfully retrieved node IP: 192.168.67.2
	I1006 02:53:43.545915       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1006 02:53:43.575943       1 server_others.go:152] "Using iptables Proxier"
	I1006 02:53:43.577975       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1006 02:53:43.577997       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1006 02:53:43.578058       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1006 02:53:43.578653       1 server.go:846] "Version info" version="v1.28.2"
	I1006 02:53:43.578670       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 02:53:43.600110       1 config.go:188] "Starting service config controller"
	I1006 02:53:43.607473       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1006 02:53:43.606559       1 config.go:97] "Starting endpoint slice config controller"
	I1006 02:53:43.607506       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1006 02:53:43.607255       1 config.go:315] "Starting node config controller"
	I1006 02:53:43.607513       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1006 02:53:43.711404       1 shared_informer.go:318] Caches are synced for node config
	I1006 02:53:43.711443       1 shared_informer.go:318] Caches are synced for service config
	I1006 02:53:43.711455       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [0cf5928c77815df3b44ae491c77e3730749b2dc1ec4b9c6754f76e504722f1c4] <==
	* I1006 02:52:57.915372       1 server_others.go:69] "Using iptables proxy"
	E1006 02:53:07.919374       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-647181": net/http: TLS handshake timeout
	I1006 02:53:14.811482       1 node.go:141] Successfully retrieved node IP: 192.168.67.2
	I1006 02:53:16.335454       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1006 02:53:16.338814       1 server_others.go:152] "Using iptables Proxier"
	I1006 02:53:16.338917       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1006 02:53:16.338948       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1006 02:53:16.339024       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1006 02:53:16.339287       1 server.go:846] "Version info" version="v1.28.2"
	I1006 02:53:16.341341       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 02:53:16.342244       1 config.go:188] "Starting service config controller"
	I1006 02:53:16.342371       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1006 02:53:16.342434       1 config.go:97] "Starting endpoint slice config controller"
	I1006 02:53:16.342463       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1006 02:53:16.344806       1 config.go:315] "Starting node config controller"
	I1006 02:53:16.344885       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1006 02:53:16.442805       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1006 02:53:16.442864       1 shared_informer.go:318] Caches are synced for service config
	I1006 02:53:16.445411       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [0c7033aa5193617c5b60e134c5208cc49f7bbfe902e6cb898289396e7e099d97] <==
	* I1006 02:53:12.622341       1 serving.go:348] Generated self-signed cert in-memory
	I1006 02:53:16.323657       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I1006 02:53:16.323683       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1006 02:53:16.325109       1 secure_serving.go:111] Initial population of client CA failed: Get "https://192.168.67.2:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": context canceled
	I1006 02:53:16.325906       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	E1006 02:53:16.325962       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	I1006 02:53:16.326058       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1006 02:53:16.326080       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	E1006 02:53:16.326088       1 shared_informer.go:314] unable to sync caches for RequestHeaderAuthRequestController
	I1006 02:53:16.326093       1 requestheader_controller.go:176] Shutting down RequestHeaderAuthRequestController
	I1006 02:53:16.326105       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1006 02:53:16.326116       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I1006 02:53:16.326182       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	E1006 02:53:16.326561       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [d5d42bb4b86d0dc14c954c1197facb1798c3a7771ac728c7c1491d2a41a018a3] <==
	* I1006 02:53:41.102376       1 serving.go:348] Generated self-signed cert in-memory
	I1006 02:53:42.671221       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I1006 02:53:42.681809       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 02:53:42.724331       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1006 02:53:42.724379       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1006 02:53:42.724444       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 02:53:42.724457       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1006 02:53:42.724479       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 02:53:42.724495       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1006 02:53:42.731657       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1006 02:53:42.731769       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1006 02:53:42.828998       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1006 02:53:42.829830       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1006 02:53:42.829856       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Oct 06 02:53:35 pause-647181 kubelet[3727]: W1006 02:53:35.916130    3727 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Oct 06 02:53:35 pause-647181 kubelet[3727]: E1006 02:53:35.916225    3727 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Oct 06 02:53:35 pause-647181 kubelet[3727]: W1006 02:53:35.919694    3727 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Oct 06 02:53:35 pause-647181 kubelet[3727]: E1006 02:53:35.919757    3727 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Oct 06 02:53:36 pause-647181 kubelet[3727]: W1006 02:53:36.065740    3727 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-647181&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Oct 06 02:53:36 pause-647181 kubelet[3727]: E1006 02:53:36.065803    3727 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-647181&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Oct 06 02:53:36 pause-647181 kubelet[3727]: I1006 02:53:36.406530    3727 kubelet_node_status.go:70] "Attempting to register node" node="pause-647181"
	Oct 06 02:53:41 pause-647181 kubelet[3727]: I1006 02:53:41.866794    3727 apiserver.go:52] "Watching apiserver"
	Oct 06 02:53:41 pause-647181 kubelet[3727]: I1006 02:53:41.873420    3727 topology_manager.go:215] "Topology Admit Handler" podUID="eec193c3-c96f-43e8-a0c3-e0964c0c7b51" podNamespace="kube-system" podName="kube-proxy-9vvq2"
	Oct 06 02:53:41 pause-647181 kubelet[3727]: I1006 02:53:41.873542    3727 topology_manager.go:215] "Topology Admit Handler" podUID="5582861e-0771-4be9-85fa-3194c946e4bc" podNamespace="kube-system" podName="coredns-5dd5756b68-qvjlc"
	Oct 06 02:53:41 pause-647181 kubelet[3727]: I1006 02:53:41.873614    3727 topology_manager.go:215] "Topology Admit Handler" podUID="006acf6b-70f8-4596-8965-0f13beb4fff6" podNamespace="kube-system" podName="kindnet-5zz7b"
	Oct 06 02:53:41 pause-647181 kubelet[3727]: I1006 02:53:41.974182    3727 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Oct 06 02:53:42 pause-647181 kubelet[3727]: I1006 02:53:42.000575    3727 kubelet_node_status.go:108] "Node was previously registered" node="pause-647181"
	Oct 06 02:53:42 pause-647181 kubelet[3727]: I1006 02:53:42.000950    3727 kubelet_node_status.go:73] "Successfully registered node" node="pause-647181"
	Oct 06 02:53:42 pause-647181 kubelet[3727]: I1006 02:53:42.006668    3727 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 06 02:53:42 pause-647181 kubelet[3727]: I1006 02:53:42.007625    3727 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 06 02:53:42 pause-647181 kubelet[3727]: I1006 02:53:42.068412    3727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/006acf6b-70f8-4596-8965-0f13beb4fff6-cni-cfg\") pod \"kindnet-5zz7b\" (UID: \"006acf6b-70f8-4596-8965-0f13beb4fff6\") " pod="kube-system/kindnet-5zz7b"
	Oct 06 02:53:42 pause-647181 kubelet[3727]: I1006 02:53:42.068490    3727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eec193c3-c96f-43e8-a0c3-e0964c0c7b51-xtables-lock\") pod \"kube-proxy-9vvq2\" (UID: \"eec193c3-c96f-43e8-a0c3-e0964c0c7b51\") " pod="kube-system/kube-proxy-9vvq2"
	Oct 06 02:53:42 pause-647181 kubelet[3727]: I1006 02:53:42.068522    3727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/006acf6b-70f8-4596-8965-0f13beb4fff6-xtables-lock\") pod \"kindnet-5zz7b\" (UID: \"006acf6b-70f8-4596-8965-0f13beb4fff6\") " pod="kube-system/kindnet-5zz7b"
	Oct 06 02:53:42 pause-647181 kubelet[3727]: I1006 02:53:42.068593    3727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/006acf6b-70f8-4596-8965-0f13beb4fff6-lib-modules\") pod \"kindnet-5zz7b\" (UID: \"006acf6b-70f8-4596-8965-0f13beb4fff6\") " pod="kube-system/kindnet-5zz7b"
	Oct 06 02:53:42 pause-647181 kubelet[3727]: I1006 02:53:42.068630    3727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eec193c3-c96f-43e8-a0c3-e0964c0c7b51-lib-modules\") pod \"kube-proxy-9vvq2\" (UID: \"eec193c3-c96f-43e8-a0c3-e0964c0c7b51\") " pod="kube-system/kube-proxy-9vvq2"
	Oct 06 02:53:42 pause-647181 kubelet[3727]: I1006 02:53:42.174651    3727 scope.go:117] "RemoveContainer" containerID="1f7d2ed33dd3634ef3dd4898e89ea5b1a30b16dde68873ab0ec0afdcba6d3efd"
	Oct 06 02:53:42 pause-647181 kubelet[3727]: I1006 02:53:42.175544    3727 scope.go:117] "RemoveContainer" containerID="0cf5928c77815df3b44ae491c77e3730749b2dc1ec4b9c6754f76e504722f1c4"
	Oct 06 02:53:42 pause-647181 kubelet[3727]: I1006 02:53:42.176785    3727 scope.go:117] "RemoveContainer" containerID="5481ab047c1f37fc36bbac6bffa2691ed4776094a359b4a13151405f76d97cb7"
	Oct 06 02:53:49 pause-647181 kubelet[3727]: I1006 02:53:49.385808    3727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-647181 -n pause-647181
helpers_test.go:261: (dbg) Run:  kubectl --context pause-647181 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (90.46s)

                                                
                                    

Test pass (266/302)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 10.71
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.15
10 TestDownloadOnly/v1.28.2/json-events 13.93
11 TestDownloadOnly/v1.28.2/preload-exists 0
15 TestDownloadOnly/v1.28.2/LogsDuration 0.09
16 TestDownloadOnly/DeleteAll 0.24
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.16
19 TestBinaryMirror 0.64
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
25 TestAddons/Setup 183.17
27 TestAddons/parallel/Registry 15.91
29 TestAddons/parallel/InspektorGadget 10.89
30 TestAddons/parallel/MetricsServer 5.89
33 TestAddons/parallel/CSI 73.02
34 TestAddons/parallel/Headlamp 12.1
35 TestAddons/parallel/CloudSpanner 5.62
36 TestAddons/parallel/LocalPath 9.4
37 TestAddons/parallel/NvidiaDevicePlugin 5.58
40 TestAddons/serial/GCPAuth/Namespaces 0.19
41 TestAddons/StoppedEnableDisable 12.43
42 TestCertOptions 34.46
43 TestCertExpiration 257.79
45 TestForceSystemdFlag 38.27
46 TestForceSystemdEnv 39.47
52 TestErrorSpam/setup 32.78
53 TestErrorSpam/start 0.94
54 TestErrorSpam/status 1.16
55 TestErrorSpam/pause 1.82
56 TestErrorSpam/unpause 1.99
57 TestErrorSpam/stop 1.51
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 49.94
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 41.01
64 TestFunctional/serial/KubeContext 0.06
65 TestFunctional/serial/KubectlGetPods 0.1
68 TestFunctional/serial/CacheCmd/cache/add_remote 4.08
69 TestFunctional/serial/CacheCmd/cache/add_local 1.14
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
71 TestFunctional/serial/CacheCmd/cache/list 0.08
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.35
73 TestFunctional/serial/CacheCmd/cache/cache_reload 2.1
74 TestFunctional/serial/CacheCmd/cache/delete 0.17
75 TestFunctional/serial/MinikubeKubectlCmd 0.16
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.16
77 TestFunctional/serial/ExtraConfig 36.34
78 TestFunctional/serial/ComponentHealth 0.1
79 TestFunctional/serial/LogsCmd 1.87
80 TestFunctional/serial/LogsFileCmd 1.86
81 TestFunctional/serial/InvalidService 4.85
83 TestFunctional/parallel/ConfigCmd 0.67
84 TestFunctional/parallel/DashboardCmd 11.33
85 TestFunctional/parallel/DryRun 0.59
86 TestFunctional/parallel/InternationalLanguage 0.23
87 TestFunctional/parallel/StatusCmd 1.16
91 TestFunctional/parallel/ServiceCmdConnect 12.76
92 TestFunctional/parallel/AddonsCmd 0.26
93 TestFunctional/parallel/PersistentVolumeClaim 25.41
95 TestFunctional/parallel/SSHCmd 0.85
96 TestFunctional/parallel/CpCmd 1.58
98 TestFunctional/parallel/FileSync 0.45
99 TestFunctional/parallel/CertSync 2.33
103 TestFunctional/parallel/NodeLabels 0.1
105 TestFunctional/parallel/NonActiveRuntimeDisabled 1.06
107 TestFunctional/parallel/License 0.35
109 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.73
110 TestFunctional/parallel/Version/short 0.1
111 TestFunctional/parallel/Version/components 0.98
112 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
114 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.62
115 TestFunctional/parallel/ImageCommands/ImageListShort 0.33
116 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
117 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
118 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
119 TestFunctional/parallel/ImageCommands/ImageBuild 5.27
120 TestFunctional/parallel/ImageCommands/Setup 2.56
121 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.19
122 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.98
123 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.38
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.14
125 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
129 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.25
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.29
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.28
133 TestFunctional/parallel/MountCmd/any-port 8.61
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.18
135 TestFunctional/parallel/ImageCommands/ImageRemove 0.71
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.35
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.09
138 TestFunctional/parallel/MountCmd/specific-port 2.3
139 TestFunctional/parallel/MountCmd/VerifyCleanup 2.65
140 TestFunctional/parallel/ServiceCmd/DeployApp 7.23
141 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
142 TestFunctional/parallel/ProfileCmd/profile_list 0.49
143 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
144 TestFunctional/parallel/ServiceCmd/List 0.63
145 TestFunctional/parallel/ServiceCmd/JSONOutput 0.66
146 TestFunctional/parallel/ServiceCmd/HTTPS 0.69
147 TestFunctional/parallel/ServiceCmd/Format 0.5
148 TestFunctional/parallel/ServiceCmd/URL 0.46
149 TestFunctional/delete_addon-resizer_images 0.09
150 TestFunctional/delete_my-image_image 0.02
151 TestFunctional/delete_minikube_cached_images 0.02
155 TestIngressAddonLegacy/StartLegacyK8sCluster 87.99
157 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.98
158 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.67
162 TestJSONOutput/start/Command 80.1
163 TestJSONOutput/start/Audit 0
165 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
166 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
168 TestJSONOutput/pause/Command 0.83
169 TestJSONOutput/pause/Audit 0
171 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
174 TestJSONOutput/unpause/Command 0.76
175 TestJSONOutput/unpause/Audit 0
177 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/stop/Command 5.95
181 TestJSONOutput/stop/Audit 0
183 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
185 TestErrorJSONOutput 0.28
187 TestKicCustomNetwork/create_custom_network 42.06
188 TestKicCustomNetwork/use_default_bridge_network 34.45
189 TestKicExistingNetwork 37.65
190 TestKicCustomSubnet 35.87
191 TestKicStaticIP 40.72
192 TestMainNoArgs 0.07
193 TestMinikubeProfile 71.06
196 TestMountStart/serial/StartWithMountFirst 6.93
197 TestMountStart/serial/VerifyMountFirst 0.31
198 TestMountStart/serial/StartWithMountSecond 9.11
199 TestMountStart/serial/VerifyMountSecond 0.31
200 TestMountStart/serial/DeleteFirst 1.69
201 TestMountStart/serial/VerifyMountPostDelete 0.3
202 TestMountStart/serial/Stop 1.24
203 TestMountStart/serial/RestartStopped 8.02
204 TestMountStart/serial/VerifyMountPostStop 0.29
207 TestMultiNode/serial/FreshStart2Nodes 127.95
208 TestMultiNode/serial/DeployApp2Nodes 7.03
210 TestMultiNode/serial/AddNode 50.07
211 TestMultiNode/serial/ProfileList 0.36
212 TestMultiNode/serial/CopyFile 11.45
213 TestMultiNode/serial/StopNode 2.43
214 TestMultiNode/serial/StartAfterStop 12.85
215 TestMultiNode/serial/RestartKeepsNodes 124.23
216 TestMultiNode/serial/DeleteNode 5.16
217 TestMultiNode/serial/StopMultiNode 24.16
218 TestMultiNode/serial/RestartMultiNode 80.12
219 TestMultiNode/serial/ValidateNameConflict 38.34
224 TestPreload 169.71
226 TestScheduledStopUnix 110.8
229 TestInsufficientStorage 10.77
232 TestKubernetesUpgrade 168.82
235 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
236 TestNoKubernetes/serial/StartWithK8s 47.52
237 TestNoKubernetes/serial/StartWithStopK8s 8.41
238 TestNoKubernetes/serial/Start 10.18
239 TestNoKubernetes/serial/VerifyK8sNotRunning 0.39
240 TestNoKubernetes/serial/ProfileList 1.12
241 TestNoKubernetes/serial/Stop 1.32
242 TestNoKubernetes/serial/StartNoArgs 7.65
243 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.47
244 TestStoppedBinaryUpgrade/Setup 1.04
246 TestStoppedBinaryUpgrade/MinikubeLogs 0.68
255 TestPause/serial/Start 86.38
264 TestNetworkPlugins/group/false 4.23
269 TestStartStop/group/old-k8s-version/serial/FirstStart 124.09
270 TestStartStop/group/old-k8s-version/serial/DeployApp 10.55
271 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.04
272 TestStartStop/group/old-k8s-version/serial/Stop 12.21
273 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
274 TestStartStop/group/old-k8s-version/serial/SecondStart 454.52
276 TestStartStop/group/no-preload/serial/FirstStart 66.22
277 TestStartStop/group/no-preload/serial/DeployApp 9.48
278 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.19
279 TestStartStop/group/no-preload/serial/Stop 12.13
280 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
281 TestStartStop/group/no-preload/serial/SecondStart 347.56
282 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.03
283 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
284 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.38
285 TestStartStop/group/old-k8s-version/serial/Pause 4.02
287 TestStartStop/group/embed-certs/serial/FirstStart 84.92
288 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 12.03
289 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.17
290 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.52
291 TestStartStop/group/no-preload/serial/Pause 4.66
293 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 53.86
294 TestStartStop/group/embed-certs/serial/DeployApp 8.61
295 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.58
296 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.21
297 TestStartStop/group/embed-certs/serial/Stop 12.19
298 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.25
299 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.04
300 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
301 TestStartStop/group/embed-certs/serial/SecondStart 629.82
302 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
303 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 354.5
304 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 18.08
305 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
306 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.37
307 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.51
309 TestStartStop/group/newest-cni/serial/FirstStart 44.41
310 TestStartStop/group/newest-cni/serial/DeployApp 0
311 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.15
312 TestStartStop/group/newest-cni/serial/Stop 1.28
313 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.25
314 TestStartStop/group/newest-cni/serial/SecondStart 32.23
315 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
316 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
317 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.38
318 TestStartStop/group/newest-cni/serial/Pause 3.41
319 TestNetworkPlugins/group/auto/Start 81.68
320 TestNetworkPlugins/group/auto/KubeletFlags 0.34
321 TestNetworkPlugins/group/auto/NetCatPod 11.39
322 TestNetworkPlugins/group/auto/DNS 0.23
323 TestNetworkPlugins/group/auto/Localhost 0.18
324 TestNetworkPlugins/group/auto/HairPin 0.21
325 TestNetworkPlugins/group/kindnet/Start 80.82
326 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.03
327 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
328 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.39
329 TestStartStop/group/embed-certs/serial/Pause 3.63
330 TestNetworkPlugins/group/calico/Start 72.01
331 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
332 TestNetworkPlugins/group/kindnet/KubeletFlags 0.35
333 TestNetworkPlugins/group/kindnet/NetCatPod 13.45
334 TestNetworkPlugins/group/kindnet/DNS 0.27
335 TestNetworkPlugins/group/kindnet/Localhost 0.23
336 TestNetworkPlugins/group/kindnet/HairPin 0.22
337 TestNetworkPlugins/group/custom-flannel/Start 66.8
338 TestNetworkPlugins/group/calico/ControllerPod 5.06
339 TestNetworkPlugins/group/calico/KubeletFlags 0.49
340 TestNetworkPlugins/group/calico/NetCatPod 13.57
341 TestNetworkPlugins/group/calico/DNS 0.34
342 TestNetworkPlugins/group/calico/Localhost 0.24
343 TestNetworkPlugins/group/calico/HairPin 0.24
344 TestNetworkPlugins/group/enable-default-cni/Start 48.89
345 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.46
346 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.45
347 TestNetworkPlugins/group/custom-flannel/DNS 0.33
348 TestNetworkPlugins/group/custom-flannel/Localhost 0.26
349 TestNetworkPlugins/group/custom-flannel/HairPin 0.22
350 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.47
351 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.56
352 TestNetworkPlugins/group/flannel/Start 67.68
353 TestNetworkPlugins/group/enable-default-cni/DNS 26.51
354 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
355 TestNetworkPlugins/group/enable-default-cni/HairPin 0.23
356 TestNetworkPlugins/group/bridge/Start 79.16
357 TestNetworkPlugins/group/flannel/ControllerPod 5.05
358 TestNetworkPlugins/group/flannel/KubeletFlags 0.42
359 TestNetworkPlugins/group/flannel/NetCatPod 12.41
360 TestNetworkPlugins/group/flannel/DNS 0.24
361 TestNetworkPlugins/group/flannel/Localhost 0.24
362 TestNetworkPlugins/group/flannel/HairPin 0.25
363 TestNetworkPlugins/group/bridge/KubeletFlags 0.35
364 TestNetworkPlugins/group/bridge/NetCatPod 11.32
365 TestNetworkPlugins/group/bridge/DNS 0.22
366 TestNetworkPlugins/group/bridge/Localhost 0.2
367 TestNetworkPlugins/group/bridge/HairPin 0.18
x
+
TestDownloadOnly/v1.16.0/json-events (10.71s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-310473 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-310473 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.714170152s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (10.71s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-310473
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-310473: exit status 85 (146.814438ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-310473 | jenkins | v1.31.2 | 06 Oct 23 02:11 UTC |          |
	|         | -p download-only-310473        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/06 02:11:03
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 02:11:03.990553 2268311 out.go:296] Setting OutFile to fd 1 ...
	I1006 02:11:03.990794 2268311 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 02:11:03.990832 2268311 out.go:309] Setting ErrFile to fd 2...
	I1006 02:11:03.990864 2268311 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 02:11:03.991189 2268311 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-2262959/.minikube/bin
	W1006 02:11:03.991383 2268311 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17314-2262959/.minikube/config/config.json: open /home/jenkins/minikube-integration/17314-2262959/.minikube/config/config.json: no such file or directory
	I1006 02:11:03.991911 2268311 out.go:303] Setting JSON to true
	I1006 02:11:03.992975 2268311 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":42810,"bootTime":1696515454,"procs":286,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1006 02:11:03.993085 2268311 start.go:138] virtualization:  
	I1006 02:11:03.996315 2268311 out.go:97] [download-only-310473] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	W1006 02:11:03.996574 2268311 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/preloaded-tarball: no such file or directory
	I1006 02:11:03.996711 2268311 notify.go:220] Checking for updates...
	I1006 02:11:04.001754 2268311 out.go:169] MINIKUBE_LOCATION=17314
	I1006 02:11:04.004331 2268311 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 02:11:04.006501 2268311 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17314-2262959/kubeconfig
	I1006 02:11:04.009035 2268311 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-2262959/.minikube
	I1006 02:11:04.011149 2268311 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1006 02:11:04.014938 2268311 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1006 02:11:04.015195 2268311 driver.go:378] Setting default libvirt URI to qemu:///system
	I1006 02:11:04.040182 2268311 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1006 02:11:04.040275 2268311 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 02:11:04.116093 2268311 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-10-06 02:11:04.105096636 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1006 02:11:04.116206 2268311 docker.go:295] overlay module found
	I1006 02:11:04.119006 2268311 out.go:97] Using the docker driver based on user configuration
	I1006 02:11:04.119034 2268311 start.go:298] selected driver: docker
	I1006 02:11:04.119041 2268311 start.go:902] validating driver "docker" against <nil>
	I1006 02:11:04.119174 2268311 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 02:11:04.188358 2268311 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-10-06 02:11:04.178736044 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1006 02:11:04.188507 2268311 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1006 02:11:04.188767 2268311 start_flags.go:386] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1006 02:11:04.188925 2268311 start_flags.go:908] Wait components to verify : map[apiserver:true system_pods:true]
	I1006 02:11:04.191533 2268311 out.go:169] Using Docker driver with root privileges
	I1006 02:11:04.193410 2268311 cni.go:84] Creating CNI manager for ""
	I1006 02:11:04.193435 2268311 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 02:11:04.193450 2268311 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1006 02:11:04.193465 2268311 start_flags.go:323] config:
	{Name:download-only-310473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-310473 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1006 02:11:04.195425 2268311 out.go:97] Starting control plane node download-only-310473 in cluster download-only-310473
	I1006 02:11:04.195446 2268311 cache.go:122] Beginning downloading kic base image for docker with crio
	I1006 02:11:04.197463 2268311 out.go:97] Pulling base image ...
	I1006 02:11:04.197486 2268311 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1006 02:11:04.197612 2268311 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1006 02:11:04.214696 2268311 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae to local cache
	I1006 02:11:04.215464 2268311 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local cache directory
	I1006 02:11:04.215569 2268311 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae to local cache
	I1006 02:11:04.276377 2268311 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I1006 02:11:04.276402 2268311 cache.go:57] Caching tarball of preloaded images
	I1006 02:11:04.277434 2268311 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1006 02:11:04.279652 2268311 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1006 02:11:04.279672 2268311 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I1006 02:11:04.399529 2268311 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:743cd3b7071469270e4dbdc0d89badaa -> /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I1006 02:11:08.772393 2268311 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae as a tarball
	I1006 02:11:12.791447 2268311 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I1006 02:11:12.791553 2268311 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I1006 02:11:13.772903 2268311 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I1006 02:11:13.773278 2268311 profile.go:148] Saving config to /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/download-only-310473/config.json ...
	I1006 02:11:13.773309 2268311 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/download-only-310473/config.json: {Name:mk55bcdb1ddd57bb8e62669c97dfba9930d10ee1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 02:11:13.773516 2268311 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1006 02:11:13.774171 2268311 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl.sha1 -> /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/linux/arm64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-310473"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.15s)
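
The v1.16.0 run above shows the preload download pattern: the tarball URL carries an md5 checksum, and preload.go hashes the saved file before trusting it (the "getting checksum" / "verifying checksum" lines). A minimal Go sketch of that download-then-verify step, reusing the URL and digest from this log; it is an illustration, not minikube's actual implementation:

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    // downloadAndVerify streams url into dest while hashing it, then compares
    // the md5 digest against the expected hex string from the download URL.
    func downloadAndVerify(url, dest, wantMD5 string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()

        out, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer out.Close()

        // Hash the stream while writing it to disk, so no second read pass
        // over the (large) tarball is needed.
        h := md5.New()
        if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
        }
        return nil
    }

    func main() {
        // URL and checksum taken verbatim from the download.go line above.
        url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4"
        if err := downloadAndVerify(url, "preload.tar.lz4", "743cd3b7071469270e4dbdc0d89badaa"); err != nil {
            fmt.Println(err)
        }
    }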

TestDownloadOnly/v1.28.2/json-events (13.93s)

=== RUN   TestDownloadOnly/v1.28.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-310473 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-310473 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (13.929125998s)
--- PASS: TestDownloadOnly/v1.28.2/json-events (13.93s)

TestDownloadOnly/v1.28.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.2/preload-exists
--- PASS: TestDownloadOnly/v1.28.2/preload-exists (0.00s)

TestDownloadOnly/v1.28.2/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-310473
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-310473: exit status 85 (92.315791ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-310473 | jenkins | v1.31.2 | 06 Oct 23 02:11 UTC |          |
	|         | -p download-only-310473        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-310473 | jenkins | v1.31.2 | 06 Oct 23 02:11 UTC |          |
	|         | -p download-only-310473        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/06 02:11:14
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 02:11:14.857144 2268388 out.go:296] Setting OutFile to fd 1 ...
	I1006 02:11:14.857354 2268388 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 02:11:14.857380 2268388 out.go:309] Setting ErrFile to fd 2...
	I1006 02:11:14.857400 2268388 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 02:11:14.857722 2268388 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-2262959/.minikube/bin
	W1006 02:11:14.857921 2268388 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17314-2262959/.minikube/config/config.json: open /home/jenkins/minikube-integration/17314-2262959/.minikube/config/config.json: no such file or directory
	I1006 02:11:14.858216 2268388 out.go:303] Setting JSON to true
	I1006 02:11:14.859382 2268388 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":42821,"bootTime":1696515454,"procs":280,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1006 02:11:14.859482 2268388 start.go:138] virtualization:  
	I1006 02:11:14.869415 2268388 out.go:97] [download-only-310473] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1006 02:11:14.869797 2268388 notify.go:220] Checking for updates...
	I1006 02:11:14.879674 2268388 out.go:169] MINIKUBE_LOCATION=17314
	I1006 02:11:14.888713 2268388 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 02:11:14.907634 2268388 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17314-2262959/kubeconfig
	I1006 02:11:14.924188 2268388 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-2262959/.minikube
	I1006 02:11:14.947853 2268388 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1006 02:11:15.004753 2268388 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1006 02:11:15.005430 2268388 config.go:182] Loaded profile config "download-only-310473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1006 02:11:15.005549 2268388 start.go:810] api.Load failed for download-only-310473: filestore "download-only-310473": Docker machine "download-only-310473" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1006 02:11:15.005659 2268388 driver.go:378] Setting default libvirt URI to qemu:///system
	W1006 02:11:15.005693 2268388 start.go:810] api.Load failed for download-only-310473: filestore "download-only-310473": Docker machine "download-only-310473" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1006 02:11:15.032313 2268388 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1006 02:11:15.032396 2268388 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 02:11:15.110452 2268388 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-10-06 02:11:15.098874695 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1006 02:11:15.110594 2268388 docker.go:295] overlay module found
	I1006 02:11:15.140816 2268388 out.go:97] Using the docker driver based on existing profile
	I1006 02:11:15.140879 2268388 start.go:298] selected driver: docker
	I1006 02:11:15.140887 2268388 start.go:902] validating driver "docker" against &{Name:download-only-310473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-310473 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1006 02:11:15.141077 2268388 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 02:11:15.210680 2268388 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-10-06 02:11:15.200851127 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1006 02:11:15.211267 2268388 cni.go:84] Creating CNI manager for ""
	I1006 02:11:15.211286 2268388 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1006 02:11:15.211304 2268388 start_flags.go:323] config:
	{Name:download-only-310473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:download-only-310473 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1006 02:11:15.243102 2268388 out.go:97] Starting control plane node download-only-310473 in cluster download-only-310473
	I1006 02:11:15.243137 2268388 cache.go:122] Beginning downloading kic base image for docker with crio
	I1006 02:11:15.273961 2268388 out.go:97] Pulling base image ...
	I1006 02:11:15.273998 2268388 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1006 02:11:15.274075 2268388 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1006 02:11:15.290878 2268388 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae to local cache
	I1006 02:11:15.291020 2268388 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local cache directory
	I1006 02:11:15.291066 2268388 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local cache directory, skipping pull
	I1006 02:11:15.291073 2268388 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in cache, skipping pull
	I1006 02:11:15.291081 2268388 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae as a tarball
	I1006 02:11:15.347286 2268388 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
	I1006 02:11:15.347315 2268388 cache.go:57] Caching tarball of preloaded images
	I1006 02:11:15.352207 2268388 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1006 02:11:15.365924 2268388 out.go:97] Downloading Kubernetes v1.28.2 preload ...
	I1006 02:11:15.365961 2268388 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 ...
	I1006 02:11:15.478765 2268388 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:ec283948b04358f92432bdd325b7fb0b -> /home/jenkins/minikube-integration/17314-2262959/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-310473"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.2/LogsDuration (0.09s)
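
Both runs above also show cache-first behavior ("exists in cache, skipping pull"): an artifact is fetched only when it is not already on disk. A sketch of that guard; the file name and download callback are hypothetical, and minikube's real cache layout is more involved:

    package main

    import (
        "fmt"
        "os"
    )

    // cachedOrDownload invokes download only when path is missing, mirroring
    // the "exists in cache, skipping pull" lines in the log.
    func cachedOrDownload(path string, download func(string) error) error {
        if _, err := os.Stat(path); err == nil {
            fmt.Println(path, "exists in cache, skipping pull")
            return nil
        }
        return download(path)
    }

    func main() {
        _ = cachedOrDownload("kicbase.tar", func(p string) error {
            fmt.Println("downloading", p, "...") // a real fetch would go here
            return nil
        })
    }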

TestDownloadOnly/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.24s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-310473
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.16s)

TestBinaryMirror (0.64s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-306268 --alsologtostderr --binary-mirror http://127.0.0.1:43951 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-306268" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-306268
--- PASS: TestBinaryMirror (0.64s)
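
TestBinaryMirror points --binary-mirror at a local HTTP endpoint (http://127.0.0.1:43951 in this run) from which minikube fetches the Kubernetes binaries instead of dl.k8s.io. A server along these lines could satisfy such a mirror; the ./mirror directory and its layout are assumptions for illustration:

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // Serve a local binary cache verbatim, e.g.
        // mirror/v1.28.2/bin/linux/arm64/kubectl; minikube then downloads
        // kubectl/kubeadm/kubelet from this address.
        http.Handle("/", http.FileServer(http.Dir("./mirror")))
        log.Fatal(http.ListenAndServe("127.0.0.1:43951", nil))
    }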

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-891734
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-891734: exit status 85 (89.398583ms)

-- stdout --
	* Profile "addons-891734" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-891734"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-891734
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-891734: exit status 85 (93.550515ms)

-- stdout --
	* Profile "addons-891734" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-891734"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

TestAddons/Setup (183.17s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-891734 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-891734 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (3m3.169613774s)
--- PASS: TestAddons/Setup (183.17s)

TestAddons/parallel/Registry (15.91s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 70.274417ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-x4gxk" [ea43b86b-677c-480a-9d39-06963a88c8e4] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.051778176s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-jnd69" [bb78ebbe-5710-4e7f-830f-33db94c493b0] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.043434372s
addons_test.go:339: (dbg) Run:  kubectl --context addons-891734 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-891734 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-891734 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.543571727s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p addons-891734 ip
addons_test.go:387: (dbg) Run:  out/minikube-linux-arm64 -p addons-891734 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.91s)
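
The registry check above reduces to `wget --spider` from a busybox pod, i.e. a headers-only reachability probe against the Service's cluster DNS name. A Go equivalent using an HTTP HEAD request (it only works from inside the cluster, where that name resolves):

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        // HEAD is the closest match to wget --spider: fetch headers only,
        // no body download.
        resp, err := http.Head("http://registry.kube-system.svc.cluster.local")
        if err != nil {
            log.Fatal(err)
        }
        resp.Body.Close()
        fmt.Println("registry responded:", resp.Status)
    }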

TestAddons/parallel/InspektorGadget (10.89s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-2fbv4" [b377ac46-9bbb-4a9e-ae28-f4d2ef6d6a24] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.016491147s
addons_test.go:840: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-891734
addons_test.go:840: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-891734: (5.868166741s)
--- PASS: TestAddons/parallel/InspektorGadget (10.89s)

TestAddons/parallel/MetricsServer (5.89s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 11.099028ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-v9pjf" [83769c9b-3320-4399-a5da-e3f6c6e53442] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.01737928s
addons_test.go:414: (dbg) Run:  kubectl --context addons-891734 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-arm64 -p addons-891734 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.89s)

TestAddons/parallel/CSI (73.02s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 73.851807ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-891734 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc -o jsonpath={.status.phase} -n default
2023/10/06 02:14:48 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-891734 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [451162e0-0787-4808-81d0-d3c1c6eafe11] Pending
helpers_test.go:344: "task-pv-pod" [451162e0-0787-4808-81d0-d3c1c6eafe11] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [451162e0-0787-4808-81d0-d3c1c6eafe11] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.013818132s
addons_test.go:583: (dbg) Run:  kubectl --context addons-891734 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-891734 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-891734 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-891734 delete pod task-pv-pod
addons_test.go:593: (dbg) Done: kubectl --context addons-891734 delete pod task-pv-pod: (1.067066515s)
addons_test.go:599: (dbg) Run:  kubectl --context addons-891734 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-891734 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-891734 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [38ae306d-b59f-472a-b701-299ef213b3d8] Pending
helpers_test.go:344: "task-pv-pod-restore" [38ae306d-b59f-472a-b701-299ef213b3d8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [38ae306d-b59f-472a-b701-299ef213b3d8] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.020952414s
addons_test.go:625: (dbg) Run:  kubectl --context addons-891734 delete pod task-pv-pod-restore
addons_test.go:625: (dbg) Done: kubectl --context addons-891734 delete pod task-pv-pod-restore: (1.064206075s)
addons_test.go:629: (dbg) Run:  kubectl --context addons-891734 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-891734 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-arm64 -p addons-891734 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-arm64 -p addons-891734 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.838220328s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-arm64 -p addons-891734 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (73.02s)
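
The long run of identical `get pvc ... -o jsonpath={.status.phase}` calls above is a poll loop: helpers_test.go re-reads the phase until the claim reports Bound or the 6m0s budget expires. A condensed sketch of such a loop, shelling out to kubectl the same way the helpers do; the 2s interval is an assumption:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForPVCBound polls the PVC phase until it is Bound or the timeout
    // elapses, mirroring the repeated kubectl invocations in the log.
    func waitForPVCBound(kubeContext, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", kubeContext,
                "get", "pvc", name, "-o", "jsonpath={.status.phase}",
                "-n", "default").Output()
            if err == nil && strings.TrimSpace(string(out)) == "Bound" {
                return nil
            }
            time.Sleep(2 * time.Second) // poll interval (assumed)
        }
        return fmt.Errorf("pvc %q not Bound within %v", name, timeout)
    }

    func main() {
        if err := waitForPVCBound("addons-891734", "hpvc", 6*time.Minute); err != nil {
            fmt.Println(err)
        }
    }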

TestAddons/parallel/Headlamp (12.1s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-891734 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-891734 --alsologtostderr -v=1: (1.077564043s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-58b88cff49-8mm4d" [62f5d47f-b750-4543-9a63-d570eb3f2862] Pending
helpers_test.go:344: "headlamp-58b88cff49-8mm4d" [62f5d47f-b750-4543-9a63-d570eb3f2862] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-58b88cff49-8mm4d" [62f5d47f-b750-4543-9a63-d570eb3f2862] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.026034567s
--- PASS: TestAddons/parallel/Headlamp (12.10s)

TestAddons/parallel/CloudSpanner (5.62s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7d49f968d9-d5s9m" [60894e2d-cbaa-4cf1-88df-9cb0ae07f2b4] Running / Ready:ContainersNotReady (containers with unready status: [cloud-spanner-emulator]) / ContainersReady:ContainersNotReady (containers with unready status: [cloud-spanner-emulator])
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.014950929s
addons_test.go:859: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-891734
--- PASS: TestAddons/parallel/CloudSpanner (5.62s)

TestAddons/parallel/LocalPath (9.4s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-891734 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-891734 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891734 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [f7a7fc5b-5a73-4ca9-a35b-7f8b68d0ad4f] Pending
helpers_test.go:344: "test-local-path" [f7a7fc5b-5a73-4ca9-a35b-7f8b68d0ad4f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [f7a7fc5b-5a73-4ca9-a35b-7f8b68d0ad4f] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [f7a7fc5b-5a73-4ca9-a35b-7f8b68d0ad4f] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.00951779s
addons_test.go:890: (dbg) Run:  kubectl --context addons-891734 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-arm64 -p addons-891734 ssh "cat /opt/local-path-provisioner/pvc-c94cf330-8988-48fe-a88a-dba4db821981_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-891734 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-891734 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-arm64 -p addons-891734 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (9.40s)

TestAddons/parallel/NvidiaDevicePlugin (5.58s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-2pwfm" [e2bb64d9-423f-4701-af20-ede29bdaf239] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.017802053s
addons_test.go:954: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-891734
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.58s)

TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-891734 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-891734 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/StoppedEnableDisable (12.43s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-891734
addons_test.go:171: (dbg) Done: out/minikube-linux-arm64 stop -p addons-891734: (12.085214426s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-891734
addons_test.go:179: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-891734
addons_test.go:184: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-891734
--- PASS: TestAddons/StoppedEnableDisable (12.43s)

TestCertOptions (34.46s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-348254 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-348254 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (31.663801919s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-348254 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-348254 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-348254 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-348254" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-348254
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-348254: (2.036388451s)
--- PASS: TestCertOptions (34.46s)
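
The openssl invocation in TestCertOptions dumps the apiserver certificate so the test can confirm that the extra --apiserver-ips and --apiserver-names ended up in its subject alternative names. The same inspection in Go, given a PEM copy of /var/lib/minikube/certs/apiserver.crt (copying it off the node is omitted):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        data, err := os.ReadFile("apiserver.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block in apiserver.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // The extra names/IPs passed at start time show up here.
        fmt.Println("DNS SANs:", cert.DNSNames)
        fmt.Println("IP SANs:", cert.IPAddresses)
    }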

TestCertExpiration (257.79s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-885413 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-885413 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (38.927779066s)
E1006 02:54:27.707558 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.crt: no such file or directory
E1006 02:54:33.943418 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-885413 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-885413 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (35.94214571s)
helpers_test.go:175: Cleaning up "cert-expiration-885413" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-885413
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-885413: (2.923425723s)
--- PASS: TestCertExpiration (257.79s)
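
TestCertExpiration starts the cluster with --cert-expiration=3m and later restarts it with 8760h, expecting expired certificates to be regenerated. The property under test is simply the certificate's NotAfter timestamp; a sketch of reading it (the file name is illustrative):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("client.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block in client.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("expires:", cert.NotAfter)
        fmt.Println("remaining:", time.Until(cert.NotAfter).Round(time.Minute))
    }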

TestForceSystemdFlag (38.27s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-377981 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-377981 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.589105738s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-377981 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-377981" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-377981
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-377981: (2.346799533s)
--- PASS: TestForceSystemdFlag (38.27s)
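
TestForceSystemdFlag's final assertion cats /etc/crio/crio.conf.d/02-crio.conf over ssh: with --force-systemd, CRI-O should be configured with the systemd cgroup manager. A sketch of that check against a local copy of the file; plain substring matching instead of a TOML parser is the simplification here:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("02-crio.conf")
        if err != nil {
            fmt.Println("read error:", err)
            return
        }
        // cgroup_manager is the CRI-O setting under [crio.runtime].
        if strings.Contains(string(data), `cgroup_manager = "systemd"`) {
            fmt.Println("CRI-O is configured for the systemd cgroup manager")
        } else {
            fmt.Println("systemd cgroup manager not found in config")
        }
    }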

TestForceSystemdEnv (39.47s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-836004 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-836004 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.857143538s)
helpers_test.go:175: Cleaning up "force-systemd-env-836004" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-836004
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-836004: (2.610642033s)
--- PASS: TestForceSystemdEnv (39.47s)

TestErrorSpam/setup (32.78s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-604471 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-604471 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-604471 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-604471 --driver=docker  --container-runtime=crio: (32.780010481s)
--- PASS: TestErrorSpam/setup (32.78s)

TestErrorSpam/start (0.94s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-604471 --log_dir /tmp/nospam-604471 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-604471 --log_dir /tmp/nospam-604471 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-604471 --log_dir /tmp/nospam-604471 start --dry-run
--- PASS: TestErrorSpam/start (0.94s)

TestErrorSpam/status (1.16s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-604471 --log_dir /tmp/nospam-604471 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-604471 --log_dir /tmp/nospam-604471 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-604471 --log_dir /tmp/nospam-604471 status
--- PASS: TestErrorSpam/status (1.16s)

TestErrorSpam/pause (1.82s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-604471 --log_dir /tmp/nospam-604471 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-604471 --log_dir /tmp/nospam-604471 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-604471 --log_dir /tmp/nospam-604471 pause
--- PASS: TestErrorSpam/pause (1.82s)

TestErrorSpam/unpause (1.99s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-604471 --log_dir /tmp/nospam-604471 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-604471 --log_dir /tmp/nospam-604471 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-604471 --log_dir /tmp/nospam-604471 unpause
--- PASS: TestErrorSpam/unpause (1.99s)

TestErrorSpam/stop (1.51s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-604471 --log_dir /tmp/nospam-604471 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-604471 --log_dir /tmp/nospam-604471 stop: (1.254078438s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-604471 --log_dir /tmp/nospam-604471 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-604471 --log_dir /tmp/nospam-604471 stop
--- PASS: TestErrorSpam/stop (1.51s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17314-2262959/.minikube/files/etc/test/nested/copy/2268306/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (49.94s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-642904 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1006 02:19:33.943015 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/client.crt: no such file or directory
E1006 02:19:33.948834 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/client.crt: no such file or directory
E1006 02:19:33.959151 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/client.crt: no such file or directory
E1006 02:19:33.979450 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/client.crt: no such file or directory
E1006 02:19:34.019729 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/client.crt: no such file or directory
E1006 02:19:34.100081 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/client.crt: no such file or directory
E1006 02:19:34.260458 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/client.crt: no such file or directory
E1006 02:19:34.580977 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/client.crt: no such file or directory
E1006 02:19:35.221819 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/client.crt: no such file or directory
E1006 02:19:36.502152 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/client.crt: no such file or directory
E1006 02:19:39.063149 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/client.crt: no such file or directory
E1006 02:19:44.184047 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/client.crt: no such file or directory
E1006 02:19:54.424747 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-642904 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (49.937149532s)
--- PASS: TestFunctional/serial/StartWithProxy (49.94s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (41.01s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-642904 --alsologtostderr -v=8
E1006 02:20:14.905558 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-642904 --alsologtostderr -v=8: (41.00852464s)
functional_test.go:659: soft start took 41.009121671s for "functional-642904" cluster.
--- PASS: TestFunctional/serial/SoftStart (41.01s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-642904 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-642904 cache add registry.k8s.io/pause:3.1: (1.354456557s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-642904 cache add registry.k8s.io/pause:3.3: (1.38616923s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-642904 cache add registry.k8s.io/pause:latest: (1.336479749s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.08s)

TestFunctional/serial/CacheCmd/cache/add_local (1.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-642904 /tmp/TestFunctionalserialCacheCmdcacheadd_local1705265846/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 cache add minikube-local-cache-test:functional-642904
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 cache delete minikube-local-cache-test:functional-642904
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-642904
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.14s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-642904 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (357.945749ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-642904 cache reload: (1.006338669s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.10s)
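Note: in sketch form, the round-trip this subtest exercises, using the same commands captured above (the profile name is this run's; any profile with cached images behaves the same):

out/minikube-linux-arm64 -p functional-642904 ssh sudo crictl rmi registry.k8s.io/pause:latest        # remove the cached image from the node
out/minikube-linux-arm64 -p functional-642904 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image no longer present
out/minikube-linux-arm64 -p functional-642904 cache reload                                            # push the cached images back onto the node
out/minikube-linux-arm64 -p functional-642904 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 0: image restored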

TestFunctional/serial/CacheCmd/cache/delete (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.17s)

TestFunctional/serial/MinikubeKubectlCmd (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 kubectl -- --context functional-642904 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.16s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-642904 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

TestFunctional/serial/ExtraConfig (36.34s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-642904 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1006 02:20:55.866392 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-642904 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.337195187s)
functional_test.go:757: restart took 36.337301822s for "functional-642904" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.34s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-642904 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.87s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-642904 logs: (1.871020122s)
--- PASS: TestFunctional/serial/LogsCmd (1.87s)

TestFunctional/serial/LogsFileCmd (1.86s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 logs --file /tmp/TestFunctionalserialLogsFileCmd1869908105/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-642904 logs --file /tmp/TestFunctionalserialLogsFileCmd1869908105/001/logs.txt: (1.856819677s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.86s)

TestFunctional/serial/InvalidService (4.85s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-642904 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-642904
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-642904: exit status 115 (593.953659ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30749 |
	|-----------|-------------|-------------|---------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-642904 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.85s)

TestFunctional/parallel/ConfigCmd (0.67s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-642904 config get cpus: exit status 14 (87.302125ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-642904 config get cpus: exit status 14 (165.408474ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.67s)
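Note: the sequence above reduces to the following round-trip; exit status 14 is how `config get` reports an unset key, and the value 2 is just the test's sample:

out/minikube-linux-arm64 -p functional-642904 config get cpus     # exit 14: key not set
out/minikube-linux-arm64 -p functional-642904 config set cpus 2
out/minikube-linux-arm64 -p functional-642904 config get cpus     # prints 2
out/minikube-linux-arm64 -p functional-642904 config unset cpus
out/minikube-linux-arm64 -p functional-642904 config get cpus     # exit 14 again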

TestFunctional/parallel/DashboardCmd (11.33s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-642904 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-642904 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2295711: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.33s)

TestFunctional/parallel/DryRun (0.59s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-642904 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-642904 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (223.71328ms)

-- stdout --
	* [functional-642904] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17314
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17314-2262959/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-2262959/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I1006 02:22:30.565866 2295096 out.go:296] Setting OutFile to fd 1 ...
	I1006 02:22:30.566008 2295096 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 02:22:30.566017 2295096 out.go:309] Setting ErrFile to fd 2...
	I1006 02:22:30.566023 2295096 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 02:22:30.566253 2295096 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-2262959/.minikube/bin
	I1006 02:22:30.566595 2295096 out.go:303] Setting JSON to false
	I1006 02:22:30.567613 2295096 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":43497,"bootTime":1696515454,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1006 02:22:30.567690 2295096 start.go:138] virtualization:  
	I1006 02:22:30.570059 2295096 out.go:177] * [functional-642904] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1006 02:22:30.572504 2295096 out.go:177]   - MINIKUBE_LOCATION=17314
	I1006 02:22:30.574354 2295096 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 02:22:30.572663 2295096 notify.go:220] Checking for updates...
	I1006 02:22:30.576245 2295096 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17314-2262959/kubeconfig
	I1006 02:22:30.578285 2295096 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-2262959/.minikube
	I1006 02:22:30.580202 2295096 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 02:22:30.581933 2295096 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 02:22:30.584212 2295096 config.go:182] Loaded profile config "functional-642904": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1006 02:22:30.584777 2295096 driver.go:378] Setting default libvirt URI to qemu:///system
	I1006 02:22:30.611907 2295096 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1006 02:22:30.612030 2295096 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 02:22:30.701584 2295096 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-10-06 02:22:30.690376305 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1006 02:22:30.701739 2295096 docker.go:295] overlay module found
	I1006 02:22:30.705238 2295096 out.go:177] * Using the docker driver based on existing profile
	I1006 02:22:30.707226 2295096 start.go:298] selected driver: docker
	I1006 02:22:30.707263 2295096 start.go:902] validating driver "docker" against &{Name:functional-642904 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-642904 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1006 02:22:30.707356 2295096 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 02:22:30.709898 2295096 out.go:177] 
	W1006 02:22:30.711888 2295096 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1006 02:22:30.713923 2295096 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-642904 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.59s)
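Note: --dry-run validates flags and resources without touching the existing cluster; a request below the usable minimum of 1800MB fails fast with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY), as the stderr above shows. A minimal sketch, flags trimmed for illustration:

out/minikube-linux-arm64 start -p functional-642904 --dry-run --memory 250MB   # exit 23: requested memory below the 1800MB floor
out/minikube-linux-arm64 start -p functional-642904 --dry-run                  # exit 0: existing profile validates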

TestFunctional/parallel/InternationalLanguage (0.23s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-642904 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-642904 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (231.029055ms)

-- stdout --
	* [functional-642904] minikube v1.31.2 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17314
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17314-2262959/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-2262959/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
-- /stdout --
** stderr ** 
	I1006 02:22:32.321281 2295382 out.go:296] Setting OutFile to fd 1 ...
	I1006 02:22:32.321439 2295382 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 02:22:32.321447 2295382 out.go:309] Setting ErrFile to fd 2...
	I1006 02:22:32.321454 2295382 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 02:22:32.321795 2295382 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-2262959/.minikube/bin
	I1006 02:22:32.322185 2295382 out.go:303] Setting JSON to false
	I1006 02:22:32.324187 2295382 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":43499,"bootTime":1696515454,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1006 02:22:32.324277 2295382 start.go:138] virtualization:  
	I1006 02:22:32.327175 2295382 out.go:177] * [functional-642904] minikube v1.31.2 sur Ubuntu 20.04 (arm64)
	I1006 02:22:32.330157 2295382 out.go:177]   - MINIKUBE_LOCATION=17314
	I1006 02:22:32.332020 2295382 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 02:22:32.330362 2295382 notify.go:220] Checking for updates...
	I1006 02:22:32.336486 2295382 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17314-2262959/kubeconfig
	I1006 02:22:32.339138 2295382 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-2262959/.minikube
	I1006 02:22:32.341174 2295382 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 02:22:32.343360 2295382 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 02:22:32.345899 2295382 config.go:182] Loaded profile config "functional-642904": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1006 02:22:32.346590 2295382 driver.go:378] Setting default libvirt URI to qemu:///system
	I1006 02:22:32.373500 2295382 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1006 02:22:32.373604 2295382 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 02:22:32.458445 2295382 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-10-06 02:22:32.448235629 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1006 02:22:32.458558 2295382 docker.go:295] overlay module found
	I1006 02:22:32.461185 2295382 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1006 02:22:32.463698 2295382 start.go:298] selected driver: docker
	I1006 02:22:32.463717 2295382 start.go:902] validating driver "docker" against &{Name:functional-642904 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-642904 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1006 02:22:32.463822 2295382 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 02:22:32.466824 2295382 out.go:177] 
	W1006 02:22:32.468783 2295382 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1006 02:22:32.470847 2295382 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)

TestFunctional/parallel/StatusCmd (1.16s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.16s)

TestFunctional/parallel/ServiceCmdConnect (12.76s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-642904 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-642904 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-6hwdc" [232bf88d-a6b0-46a8-82d6-f7a0762679ec] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-6hwdc" [232bf88d-a6b0-46a8-82d6-f7a0762679ec] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.026445959s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:31850
functional_test.go:1674: http://192.168.49.2:31850: success! body:

Hostname: hello-node-connect-7799dfb7c6-6hwdc

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31850
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.76s)
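Note: condensed, the connectivity flow above; the NodePort (31850 in this run) is assigned by Kubernetes, so the URL varies, and fetching it with curl is an equivalent manual check to the test's HTTP client:

kubectl --context functional-642904 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
kubectl --context functional-642904 expose deployment hello-node-connect --type=NodePort --port=8080
out/minikube-linux-arm64 -p functional-642904 service hello-node-connect --url   # e.g. http://192.168.49.2:31850
curl "$(out/minikube-linux-arm64 -p functional-642904 service hello-node-connect --url)"   # echoserver reply as above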

TestFunctional/parallel/AddonsCmd (0.26s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.26s)

TestFunctional/parallel/PersistentVolumeClaim (25.41s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [de656da9-a313-4859-950c-35d0ed4893cb] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.013952391s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-642904 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-642904 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-642904 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-642904 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [804fa082-4d79-45f8-b45b-7588ea67008e] Pending
helpers_test.go:344: "sp-pod" [804fa082-4d79-45f8-b45b-7588ea67008e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [804fa082-4d79-45f8-b45b-7588ea67008e] Running
E1006 02:22:17.787090 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.015535822s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-642904 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-642904 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-642904 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [73e5cc06-a772-467e-b95c-e4bcb125d612] Pending
helpers_test.go:344: "sp-pod" [73e5cc06-a772-467e-b95c-e4bcb125d612] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.024807361s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-642904 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.41s)
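Note: condensed, the persistence check above: data written through the claim must survive pod deletion and recreation (the manifests are the repo's testdata):

kubectl --context functional-642904 apply -f testdata/storage-provisioner/pvc.yaml
kubectl --context functional-642904 apply -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-642904 exec sp-pod -- touch /tmp/mount/foo     # write through the PVC
kubectl --context functional-642904 delete -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-642904 apply -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-642904 exec sp-pod -- ls /tmp/mount            # foo survives the pod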

TestFunctional/parallel/SSHCmd (0.85s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.85s)

TestFunctional/parallel/CpCmd (1.58s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 ssh -n functional-642904 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 cp functional-642904:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2134012191/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 ssh -n functional-642904 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.58s)

TestFunctional/parallel/FileSync (0.45s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/2268306/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 ssh "sudo cat /etc/test/nested/copy/2268306/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.45s)

TestFunctional/parallel/CertSync (2.33s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/2268306.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 ssh "sudo cat /etc/ssl/certs/2268306.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/2268306.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 ssh "sudo cat /usr/share/ca-certificates/2268306.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/22683062.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 ssh "sudo cat /etc/ssl/certs/22683062.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/22683062.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 ssh "sudo cat /usr/share/ca-certificates/22683062.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.33s)
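Note: the hashed filenames checked above (51391683.0, 3ec20f2e.0) appear to follow the OpenSSL subject-hash convention for CA directories; assuming that convention, the mapping can be reproduced with:

openssl x509 -hash -noout -in /usr/share/ca-certificates/2268306.pem    # expected to print 51391683
openssl x509 -hash -noout -in /usr/share/ca-certificates/22683062.pem   # expected to print 3ec20f2e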

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-642904 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

TestFunctional/parallel/NonActiveRuntimeDisabled (1.06s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-642904 ssh "sudo systemctl is-active docker": exit status 1 (464.991852ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-642904 ssh "sudo systemctl is-active containerd": exit status 1 (594.579786ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (1.06s)
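
Note: both exit-status-3 results above are the expected outcome, not failures: systemctl is-active prints the unit state and exits non-zero whenever the unit is not active (status 3 conventionally means inactive), which is how this test confirms docker and containerd are disabled while cri-o is the active runtime. Sketch:

	systemctl is-active docker; echo "exit=$?"   # expected here: "inactive" then "exit=3"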

TestFunctional/parallel/License (0.35s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.35s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.73s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-642904 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-642904 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-642904 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2291519: os: process already finished
helpers_test.go:502: unable to terminate pid 2291379: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-642904 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.73s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (0.98s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.98s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-642904 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.62s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-642904 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [f91cac7a-870c-48ef-937a-48e2e6645e87] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [f91cac7a-870c-48ef-937a-48e2e6645e87] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.029974457s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.62s)
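
Note: the 4m0s polling loop above is roughly what kubectl's built-in wait does; a hedged one-liner for the same selector and timeout:

	kubectl --context functional-642904 wait --for=condition=ready pod -l run=nginx-svc --timeout=4m0s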

TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-642904 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-642904
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-642904 image ls --format short --alsologtostderr:
I1006 02:22:37.544187 2296100 out.go:296] Setting OutFile to fd 1 ...
I1006 02:22:37.544427 2296100 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1006 02:22:37.544457 2296100 out.go:309] Setting ErrFile to fd 2...
I1006 02:22:37.544477 2296100 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1006 02:22:37.544771 2296100 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-2262959/.minikube/bin
I1006 02:22:37.545455 2296100 config.go:182] Loaded profile config "functional-642904": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1006 02:22:37.545686 2296100 config.go:182] Loaded profile config "functional-642904": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1006 02:22:37.546289 2296100 cli_runner.go:164] Run: docker container inspect functional-642904 --format={{.State.Status}}
I1006 02:22:37.572729 2296100 ssh_runner.go:195] Run: systemctl --version
I1006 02:22:37.572785 2296100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-642904
I1006 02:22:37.601950 2296100 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35274 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/functional-642904/id_rsa Username:docker}
I1006 02:22:37.700885 2296100 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 image ls --format table --alsologtostderr
2023/10/06 02:22:43 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-642904 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | latest             | 2a4fbb36e9660 | 196MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| gcr.io/k8s-minikube/busybox             | latest             | 71a676dd070f4 | 1.63MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/kube-apiserver          | v1.28.2            | 30bb499447fe1 | 121MB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | 04b4eaa3d3db8 | 60.9MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| localhost/my-image                      | functional-642904  | 19b4adf6cb3f9 | 1.64MB |
| registry.k8s.io/kube-scheduler          | v1.28.2            | 64fc40cee3716 | 59.2MB |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| gcr.io/google-containers/addon-resizer  | functional-642904  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/kube-controller-manager | v1.28.2            | 89d57b83c1786 | 117MB  |
| docker.io/library/nginx                 | alpine             | df8fd1ca35d66 | 45.3MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | 97e04611ad434 | 51.4MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 9cdd6470f48c8 | 182MB  |
| registry.k8s.io/kube-proxy              | v1.28.2            | 7da62c127fc0f | 69.9MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-642904 image ls --format table --alsologtostderr:
I1006 02:22:43.654406 2296510 out.go:296] Setting OutFile to fd 1 ...
I1006 02:22:43.654592 2296510 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1006 02:22:43.654627 2296510 out.go:309] Setting ErrFile to fd 2...
I1006 02:22:43.654650 2296510 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1006 02:22:43.655015 2296510 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-2262959/.minikube/bin
I1006 02:22:43.656080 2296510 config.go:182] Loaded profile config "functional-642904": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1006 02:22:43.656296 2296510 config.go:182] Loaded profile config "functional-642904": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1006 02:22:43.657156 2296510 cli_runner.go:164] Run: docker container inspect functional-642904 --format={{.State.Status}}
I1006 02:22:43.676418 2296510 ssh_runner.go:195] Run: systemctl --version
I1006 02:22:43.676470 2296510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-642904
I1006 02:22:43.694893 2296510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35274 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/functional-642904/id_rsa Username:docker}
I1006 02:22:43.789322 2296510 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-642904 image ls --format json --alsologtostderr:
[{"id":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"60867618"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"df8fd1ca35d66acf0c88cf3b0364ae8bd392860d54075094884e3d014e4d186b","repoDigests":["docker.io/library/nginx@sha256:4c93a3bd8bf95412889dd84213570102176b6052d88bb828eaf449c56aca55ef","docker.io/library/nginx@sha256:96032dda68e09456804a4939486df02acd5459c1e2b81c0eed017130098ca003"],"repoTags":["docker.io
/library/nginx:alpine"],"size":"45331256"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-642904"],"size":"34114467"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3","registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"182203183"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"2a4fbb36e96607b16e5af2e24dc6a1025a4795520c98c6b9ead9c4113617cb73","repoDigests":["docker.io/
library/nginx@sha256:32da30332506740a2f7c34d5dc70467b7f14ec67d912703568daff790ab3f755","docker.io/library/nginx@sha256:65cd8f49af749786a95ea0c46a76c3269bb21cfcb0f0a81d2bbf0def96fb6324"],"repoTags":["docker.io/library/nginx:latest"],"size":"196196620"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c","repoDigests":["registry.k8s.io/kube-apiserver@sha256:07742a71be5e2ac
5dc434618fa720ba38bebb463e3bdc0c58b600b4f7716bc3d","registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.2"],"size":"121054158"},{"id":"89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:3c85f8a91743f4c306163137b121c64816c5c15bf2f002c9238ce0a4ac22b5c8","registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.2"],"size":"117187380"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"4ad9a9b00900b1207550caaf760fb84fac748711e29f485
b7d7358a0c0e4d3e6","repoDigests":["docker.io/library/96c9d7c7ef57b05e976e0fe2ce1277bfe59c840dd869d8899f75afddebf007b6-tmp@sha256:1423fafee6ed561968933a997a41df5d9c8d36c4bad48ed7faba4025c87925fe"],"repoTags":[],"size":"1637644"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51393451"},{"id":"7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781
e3e0e6dddfa","repoDigests":["registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf","registry.k8s.io/kube-proxy@sha256:714d43ef0334cfb0e15ffd89f0b385681374b72a4865be28ff891b6297c015b8"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.2"],"size":"69926807"},{"id":"64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab","registry.k8s.io/kube-scheduler@sha256:f673cc4710d8ed6e3bd224b5641d2537d08e19177a291c2d9e189ea16f081c88"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.2"],"size":"59188020"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/paus
e@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"19b4adf6cb3f94610053975a136f17868b285db30176fda79f2ad7367142fea3","repoDigests":["localhost/my-image@sha256:e2741234c6ed007bf50c166095e02b975
837263373a95656c1ca3d863938ec0c"],"repoTags":["localhost/my-image:functional-642904"],"size":"1640226"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-642904 image ls --format json --alsologtostderr:
I1006 02:22:43.380038 2296484 out.go:296] Setting OutFile to fd 1 ...
I1006 02:22:43.380255 2296484 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1006 02:22:43.380285 2296484 out.go:309] Setting ErrFile to fd 2...
I1006 02:22:43.380307 2296484 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1006 02:22:43.380587 2296484 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-2262959/.minikube/bin
I1006 02:22:43.381263 2296484 config.go:182] Loaded profile config "functional-642904": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1006 02:22:43.381453 2296484 config.go:182] Loaded profile config "functional-642904": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1006 02:22:43.382047 2296484 cli_runner.go:164] Run: docker container inspect functional-642904 --format={{.State.Status}}
I1006 02:22:43.401326 2296484 ssh_runner.go:195] Run: systemctl --version
I1006 02:22:43.401385 2296484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-642904
I1006 02:22:43.421333 2296484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35274 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/functional-642904/id_rsa Username:docker}
I1006 02:22:43.512719 2296484 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
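
Note: the JSON format is the machine-readable variant of image ls; to pull out just the tags from it (a sketch, assuming jq is available on the host):

	out/minikube-linux-arm64 -p functional-642904 image ls --format json | jq -r '.[].repoTags[]?'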

TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-642904 image ls --format yaml --alsologtostderr:
- id: 04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "60867618"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
- registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "182203183"
- id: 30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:07742a71be5e2ac5dc434618fa720ba38bebb463e3bdc0c58b600b4f7716bc3d
- registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.2
size: "121054158"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-642904
size: "34114467"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab
- registry.k8s.io/kube-scheduler@sha256:f673cc4710d8ed6e3bd224b5641d2537d08e19177a291c2d9e189ea16f081c88
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.2
size: "59188020"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 2a4fbb36e96607b16e5af2e24dc6a1025a4795520c98c6b9ead9c4113617cb73
repoDigests:
- docker.io/library/nginx@sha256:32da30332506740a2f7c34d5dc70467b7f14ec67d912703568daff790ab3f755
- docker.io/library/nginx@sha256:65cd8f49af749786a95ea0c46a76c3269bb21cfcb0f0a81d2bbf0def96fb6324
repoTags:
- docker.io/library/nginx:latest
size: "196196620"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51393451"
- id: 89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:3c85f8a91743f4c306163137b121c64816c5c15bf2f002c9238ce0a4ac22b5c8
- registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.2
size: "117187380"
- id: 7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa
repoDigests:
- registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf
- registry.k8s.io/kube-proxy@sha256:714d43ef0334cfb0e15ffd89f0b385681374b72a4865be28ff891b6297c015b8
repoTags:
- registry.k8s.io/kube-proxy:v1.28.2
size: "69926807"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: df8fd1ca35d66acf0c88cf3b0364ae8bd392860d54075094884e3d014e4d186b
repoDigests:
- docker.io/library/nginx@sha256:4c93a3bd8bf95412889dd84213570102176b6052d88bb828eaf449c56aca55ef
- docker.io/library/nginx@sha256:96032dda68e09456804a4939486df02acd5459c1e2b81c0eed017130098ca003
repoTags:
- docker.io/library/nginx:alpine
size: "45331256"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-642904 image ls --format yaml --alsologtostderr:
I1006 02:22:37.853746 2296167 out.go:296] Setting OutFile to fd 1 ...
I1006 02:22:37.854083 2296167 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1006 02:22:37.854119 2296167 out.go:309] Setting ErrFile to fd 2...
I1006 02:22:37.854139 2296167 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1006 02:22:37.854740 2296167 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-2262959/.minikube/bin
I1006 02:22:37.855887 2296167 config.go:182] Loaded profile config "functional-642904": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1006 02:22:37.856100 2296167 config.go:182] Loaded profile config "functional-642904": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1006 02:22:37.856666 2296167 cli_runner.go:164] Run: docker container inspect functional-642904 --format={{.State.Status}}
I1006 02:22:37.875059 2296167 ssh_runner.go:195] Run: systemctl --version
I1006 02:22:37.875111 2296167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-642904
I1006 02:22:37.895364 2296167 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35274 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/functional-642904/id_rsa Username:docker}
I1006 02:22:37.991098 2296167 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-642904 ssh pgrep buildkitd: exit status 1 (326.034983ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 image build -t localhost/my-image:functional-642904 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-642904 image build -t localhost/my-image:functional-642904 testdata/build --alsologtostderr: (4.680106672s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-642904 image build -t localhost/my-image:functional-642904 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 4ad9a9b0090
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-642904
--> 19b4adf6cb3
Successfully tagged localhost/my-image:functional-642904
19b4adf6cb3f94610053975a136f17868b285db30176fda79f2ad7367142fea3
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-642904 image build -t localhost/my-image:functional-642904 testdata/build --alsologtostderr:
I1006 02:22:38.469144 2296241 out.go:296] Setting OutFile to fd 1 ...
I1006 02:22:38.470574 2296241 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1006 02:22:38.470608 2296241 out.go:309] Setting ErrFile to fd 2...
I1006 02:22:38.470631 2296241 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1006 02:22:38.470914 2296241 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-2262959/.minikube/bin
I1006 02:22:38.471716 2296241 config.go:182] Loaded profile config "functional-642904": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1006 02:22:38.472643 2296241 config.go:182] Loaded profile config "functional-642904": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1006 02:22:38.473192 2296241 cli_runner.go:164] Run: docker container inspect functional-642904 --format={{.State.Status}}
I1006 02:22:38.493510 2296241 ssh_runner.go:195] Run: systemctl --version
I1006 02:22:38.493559 2296241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-642904
I1006 02:22:38.522653 2296241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35274 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/functional-642904/id_rsa Username:docker}
I1006 02:22:38.618737 2296241 build_images.go:151] Building image from path: /tmp/build.706089138.tar
I1006 02:22:38.618808 2296241 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1006 02:22:38.634192 2296241 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.706089138.tar
I1006 02:22:38.643262 2296241 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.706089138.tar: stat -c "%s %y" /var/lib/minikube/build/build.706089138.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.706089138.tar': No such file or directory
I1006 02:22:38.643291 2296241 ssh_runner.go:362] scp /tmp/build.706089138.tar --> /var/lib/minikube/build/build.706089138.tar (3072 bytes)
I1006 02:22:38.681902 2296241 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.706089138
I1006 02:22:38.695212 2296241 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.706089138 -xf /var/lib/minikube/build/build.706089138.tar
I1006 02:22:38.721994 2296241 crio.go:297] Building image: /var/lib/minikube/build/build.706089138
I1006 02:22:38.722136 2296241 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-642904 /var/lib/minikube/build/build.706089138 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1006 02:22:43.020022 2296241 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-642904 /var/lib/minikube/build/build.706089138 --cgroup-manager=cgroupfs: (4.297835929s)
I1006 02:22:43.020109 2296241 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.706089138
I1006 02:22:43.031137 2296241 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.706089138.tar
I1006 02:22:43.041850 2296241 build_images.go:207] Built localhost/my-image:functional-642904 from /tmp/build.706089138.tar
I1006 02:22:43.041937 2296241 build_images.go:123] succeeded building to: functional-642904
I1006 02:22:43.041948 2296241 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.27s)
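
Note: judging from the STEP lines in the stdout above, testdata/build holds a three-step Containerfile plus a content.txt (whose contents this log does not show). A hedged reconstruction of an equivalent build context:

	cat > Containerfile <<-'EOF'
	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /
	EOF
	echo placeholder > content.txt   # stand-in; the real file is not in the log
	out/minikube-linux-arm64 -p functional-642904 image build -t localhost/my-image:functional-642904 .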

TestFunctional/parallel/ImageCommands/Setup (2.56s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.533687221s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-642904
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.56s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 image load --daemon gcr.io/google-containers/addon-resizer:functional-642904 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-642904 image load --daemon gcr.io/google-containers/addon-resizer:functional-642904 --alsologtostderr: (3.942351684s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.19s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 image load --daemon gcr.io/google-containers/addon-resizer:functional-642904 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-642904 image load --daemon gcr.io/google-containers/addon-resizer:functional-642904 --alsologtostderr: (2.714834096s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.98s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.538779423s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-642904
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 image load --daemon gcr.io/google-containers/addon-resizer:functional-642904 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-642904 image load --daemon gcr.io/google-containers/addon-resizer:functional-642904 --alsologtostderr: (4.397802462s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.38s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.14s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-642904 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.14s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.107.116 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-642904 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.25s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.25s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.29s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.29s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.28s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.28s)

TestFunctional/parallel/MountCmd/any-port (8.61s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-642904 /tmp/TestFunctionalparallelMountCmdany-port2088229186/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1696558918806173250" to /tmp/TestFunctionalparallelMountCmdany-port2088229186/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1696558918806173250" to /tmp/TestFunctionalparallelMountCmdany-port2088229186/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1696558918806173250" to /tmp/TestFunctionalparallelMountCmdany-port2088229186/001/test-1696558918806173250
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-642904 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (552.837415ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  6 02:21 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  6 02:21 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  6 02:21 test-1696558918806173250
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 ssh cat /mount-9p/test-1696558918806173250
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-642904 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [717ff409-9a33-42c4-8ec2-10faa1f85af9] Pending
helpers_test.go:344: "busybox-mount" [717ff409-9a33-42c4-8ec2-10faa1f85af9] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [717ff409-9a33-42c4-8ec2-10faa1f85af9] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [717ff409-9a33-42c4-8ec2-10faa1f85af9] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.023698271s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-642904 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-642904 /tmp/TestFunctionalparallelMountCmdany-port2088229186/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.61s)
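
Note: the findmnt retry above is the core assertion that the host directory showed up in the guest as a 9p filesystem; a sketch for inspecting the mount in more detail:

	out/minikube-linux-arm64 -p functional-642904 ssh "findmnt -T /mount-9p -o SOURCE,FSTYPE,OPTIONS"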

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 image save gcr.io/google-containers/addon-resizer:functional-642904 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-arm64 -p functional-642904 image save gcr.io/google-containers/addon-resizer:functional-642904 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.180238086s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.18s)
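
Note: the saved file is an ordinary image tarball, so a quick host-side sanity check is possible (sketch):

	tar -tf /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar | head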

TestFunctional/parallel/ImageCommands/ImageRemove (0.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 image rm gcr.io/google-containers/addon-resizer:functional-642904 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.71s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-642904 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.078563492s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.35s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-642904
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 image save --daemon gcr.io/google-containers/addon-resizer:functional-642904 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-arm64 -p functional-642904 image save --daemon gcr.io/google-containers/addon-resizer:functional-642904 --alsologtostderr: (1.043553169s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-642904
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.09s)

TestFunctional/parallel/MountCmd/specific-port (2.3s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-642904 /tmp/TestFunctionalparallelMountCmdspecific-port1018501824/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-642904 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (392.942228ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-642904 /tmp/TestFunctionalparallelMountCmdspecific-port1018501824/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-642904 ssh "sudo umount -f /mount-9p": exit status 1 (510.819684ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-642904 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-642904 /tmp/TestFunctionalparallelMountCmdspecific-port1018501824/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.30s)
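
Note: with --port 46464 the host side of the mount serves 9p on a fixed TCP port rather than a random one. Conceptually the guest end performs something like the following (a hedged sketch; HOST_IP stands in for the host address the guest uses, which this log does not show):

	sudo mount -t 9p -o trans=tcp,port=46464,version=9p2000.L "$HOST_IP" /mount-9p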

TestFunctional/parallel/MountCmd/VerifyCleanup (2.65s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-642904 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3855903775/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-642904 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3855903775/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-642904 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3855903775/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-642904 ssh "findmnt -T" /mount1: exit status 1 (741.650437ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-642904 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-642904 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3855903775/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-642904 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3855903775/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-642904 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3855903775/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.65s)
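
The cleanup path exercised here is the --kill flag, which tears down every mount daemon belonging to a profile in one call instead of stopping each one individually. A minimal sketch (profile name and host path are placeholders):

# start several concurrent mounts of the same host directory
minikube mount -p demo /tmp/scratch:/mount1 &
minikube mount -p demo /tmp/scratch:/mount2 &
# kill all mount processes for the profile at once
minikube mount -p demo --kill=true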

x
+
TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-642904 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-642904 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-dvlq9" [65c95db2-2655-4022-859d-a2d365b311e6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-dvlq9" [65c95db2-2655-4022-859d-a2d365b311e6] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.016775693s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)
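
The deployment step is plain kubectl against the cluster context. A minimal sketch using the same image as the log; the explicit readiness wait and its timeout are assumptions standing in for the test's polling loop:

# create the deployment and expose it on a NodePort
kubectl create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
kubectl expose deployment hello-node --type=NodePort --port=8080
# block until the pod reports Ready
kubectl wait --for=condition=Ready pod -l app=hello-node --timeout=120s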

x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "380.922818ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "105.385008ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "359.820996ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "76.565582ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)
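
--light skips the per-profile status probes, which is why the second listing returns in ~77ms versus ~360ms for the full one. A sketch of consuming the output, assuming jq is available and the usual valid/invalid envelope in the JSON:

# full listing: includes a live status check per profile (slower)
minikube profile list -o json | jq -r '.valid[].Name'
# light listing: reads config only, no status checks (faster)
minikube profile list -o json --light | jq -r '.valid[].Name'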

x
+
TestFunctional/parallel/ServiceCmd/List (0.63s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.63s)

x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.66s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 service list -o json
functional_test.go:1493: Took "660.479707ms" to run "out/minikube-linux-arm64 -p functional-642904 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.66s)

x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.69s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:30789
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.69s)

x
+
TestFunctional/parallel/ServiceCmd/Format (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.50s)

x
+
TestFunctional/parallel/ServiceCmd/URL (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-642904 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:30789
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.46s)
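
service --url resolves a NodePort service to a reachable endpoint without opening a browser. A minimal sketch built from the same flags as the log (profile and service names are placeholders):

# capture the http endpoint and probe it
URL="$(minikube -p demo service hello-node --url)"
curl -s "$URL"
# https variant, and a Go template that prints only the node IP
minikube -p demo service hello-node --https --url
minikube -p demo service hello-node --url --format={{.IP}}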

x
+
TestFunctional/delete_addon-resizer_images (0.09s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-642904
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

x
+
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-642904
--- PASS: TestFunctional/delete_my-image_image (0.02s)

x
+
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-642904
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

x
+
TestIngressAddonLegacy/StartLegacyK8sCluster (87.99s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-923493 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-923493 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m27.991044207s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (87.99s)

x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.98s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-923493 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-923493 addons enable ingress --alsologtostderr -v=5: (11.977521217s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.98s)

x
+
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.67s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-923493 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.67s)

x
+
TestJSONOutput/start/Command (80.1s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-112239 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E1006 02:28:03.546210 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/functional-642904/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-112239 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m20.102029699s)
--- PASS: TestJSONOutput/start/Command (80.10s)
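
--output=json turns start into a stream of one CloudEvent per line, which is what the Audit and *CurrentSteps subtests below assert over. A sketch of consuming the stream, assuming jq is available (profile name is a placeholder; the field names match the events shown under TestErrorJSONOutput below):

# print "current/total message" for every step event
minikube start -p demo --output=json --user=testUser \
  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | "\(.data.currentstep)/\(.data.totalsteps) \(.data.message)"'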

x
+
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

x
+
TestJSONOutput/pause/Command (0.83s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-112239 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.83s)

x
+
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

x
+
TestJSONOutput/unpause/Command (0.76s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-112239 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.76s)

x
+
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

x
+
TestJSONOutput/stop/Command (5.95s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-112239 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-112239 --output=json --user=testUser: (5.953255741s)
--- PASS: TestJSONOutput/stop/Command (5.95s)

x
+
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

x
+
TestErrorJSONOutput (0.28s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-426217 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-426217 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (104.637423ms)
-- stdout --
	{"specversion":"1.0","id":"fe64fa64-8d8b-4a43-a6ca-22badfbd191b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-426217] minikube v1.31.2 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b9fb7d14-f954-42a2-b9f1-2632f57f6718","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17314"}}
	{"specversion":"1.0","id":"135483b5-8890-4c60-b45d-e37bf8d4ccdb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f353b879-fe65-46ad-90ca-30520c5f065c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17314-2262959/kubeconfig"}}
	{"specversion":"1.0","id":"47f61a77-bb36-48cc-bf68-0e6faf89b0dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-2262959/.minikube"}}
	{"specversion":"1.0","id":"e2a27205-f562-4af0-9c43-dc7d63ac3f9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"e54d04ac-4baa-4078-bced-20ab26530d9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"85802086-5d5a-43f0-88e3-52cc21622bd3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-426217" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-426217
--- PASS: TestErrorJSONOutput (0.28s)
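
On failure the stream ends with an io.k8s.sigs.minikube.error event carrying the exit code, as in the stdout above. A sketch of extracting it, assuming jq is available (profile name is a placeholder):

# surface the error event from a failed JSON-mode start
minikube start -p demo --output=json --driver=fail 2>/dev/null \
  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message) (exit \(.data.exitcode))"'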

x
+
TestKicCustomNetwork/create_custom_network (42.06s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-694718 --network=
E1006 02:29:25.466416 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/functional-642904/client.crt: no such file or directory
E1006 02:29:27.711207 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.crt: no such file or directory
E1006 02:29:27.716397 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.crt: no such file or directory
E1006 02:29:27.726617 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.crt: no such file or directory
E1006 02:29:27.747280 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.crt: no such file or directory
E1006 02:29:27.787917 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.crt: no such file or directory
E1006 02:29:27.868588 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.crt: no such file or directory
E1006 02:29:28.029344 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.crt: no such file or directory
E1006 02:29:28.350284 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.crt: no such file or directory
E1006 02:29:28.991185 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.crt: no such file or directory
E1006 02:29:30.272298 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.crt: no such file or directory
E1006 02:29:32.833541 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.crt: no such file or directory
E1006 02:29:33.943355 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/client.crt: no such file or directory
E1006 02:29:37.954481 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-694718 --network=: (39.906478485s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-694718" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-694718
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-694718: (2.133886405s)
--- PASS: TestKicCustomNetwork/create_custom_network (42.06s)
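
--network controls which Docker network the KIC node container joins; left empty, minikube creates a dedicated network, which is what the docker network ls check above confirms. A minimal sketch (profile names are placeholders; that the auto-created network carries the profile's name is an assumption drawn from that check):

# start on an auto-created per-profile network and confirm it exists
minikube start -p net-demo --network=
docker network ls --format {{.Name}} | grep net-demo
# or join Docker's default bridge instead
minikube start -p bridge-demo --network=bridge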

x
+
TestKicCustomNetwork/use_default_bridge_network (34.45s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-268958 --network=bridge
E1006 02:29:48.194692 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.crt: no such file or directory
E1006 02:30:08.675147 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-268958 --network=bridge: (32.368933436s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-268958" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-268958
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-268958: (2.043675384s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.45s)

x
+
TestKicExistingNetwork (37.65s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-047452 --network=existing-network
E1006 02:30:49.637178 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-047452 --network=existing-network: (35.436726834s)
helpers_test.go:175: Cleaning up "existing-network-047452" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-047452
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-047452: (2.042922969s)
--- PASS: TestKicExistingNetwork (37.65s)

x
+
TestKicCustomSubnet (35.87s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-215398 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-215398 --subnet=192.168.60.0/24: (33.769907968s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-215398 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-215398" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-215398
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-215398: (2.071163915s)
--- PASS: TestKicCustomSubnet (35.87s)
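
--subnet pins the CIDR of the profile's Docker network, and the inspect template from the log reads it back. A minimal sketch (profile name is a placeholder; it assumes the network is named after the profile, as in the inspect call above):

# start on a pinned CIDR, then read it back from the Docker network
minikube start -p subnet-demo --subnet=192.168.60.0/24
docker network inspect subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"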

x
+
TestKicStaticIP (40.72s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-847947 --static-ip=192.168.200.200
E1006 02:31:41.623630 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/functional-642904/client.crt: no such file or directory
E1006 02:32:09.307506 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/functional-642904/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-847947 --static-ip=192.168.200.200: (38.37451832s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-847947 ip
helpers_test.go:175: Cleaning up "static-ip-847947" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-847947
E1006 02:32:11.557417 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-847947: (2.165730823s)
--- PASS: TestKicStaticIP (40.72s)
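
--static-ip pins the node container's address instead of letting Docker assign one. A minimal sketch (profile name is a placeholder):

# pin the node IP, then confirm minikube reports the same address
minikube start -p ip-demo --static-ip=192.168.200.200
minikube -p ip-demo ip    # expected: 192.168.200.200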

x
+
TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.07s)

x
+
TestMinikubeProfile (71.06s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-280458 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-280458 --driver=docker  --container-runtime=crio: (30.634900173s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-282830 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-282830 --driver=docker  --container-runtime=crio: (35.097808589s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-280458
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-282830
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-282830" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-282830
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-282830: (1.993739377s)
helpers_test.go:175: Cleaning up "first-280458" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-280458
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-280458: (1.983322225s)
--- PASS: TestMinikubeProfile (71.06s)
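
minikube profile <name> switches the active profile that subsequent commands default to; the test flips between its two clusters and re-lists after each switch. A minimal sketch (profile names are placeholders):

# make "first" active, inspect, then switch to "second"
minikube profile first
minikube profile list -ojson
minikube profile second
minikube profile list -ojson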

x
+
TestMountStart/serial/StartWithMountFirst (6.93s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-056662 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-056662 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.931105617s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.93s)
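
The --mount* flags bake a 9p host mount into the machine at start time, here with Kubernetes disabled entirely. A minimal sketch using the same flags as the log (profile name is a placeholder; that the default mount string lands the host home directory at /minikube-host is an assumption based on the VerifyMount checks below):

# no-Kubernetes machine with a host mount served on a fixed port
minikube start -p m1 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 \
  --mount-port 46464 --mount-uid 0 --no-kubernetes
# list the mounted host directory from inside the guest
minikube -p m1 ssh -- ls /minikube-host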

x
+
TestMountStart/serial/VerifyMountFirst (0.31s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-056662 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.31s)

x
+
TestMountStart/serial/StartWithMountSecond (9.11s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-058500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-058500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.109642916s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.11s)

x
+
TestMountStart/serial/VerifyMountSecond (0.31s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-058500 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.31s)

x
+
TestMountStart/serial/DeleteFirst (1.69s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-056662 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-056662 --alsologtostderr -v=5: (1.689882111s)
--- PASS: TestMountStart/serial/DeleteFirst (1.69s)

x
+
TestMountStart/serial/VerifyMountPostDelete (0.3s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-058500 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.30s)

x
+
TestMountStart/serial/Stop (1.24s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-058500
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-058500: (1.241595473s)
--- PASS: TestMountStart/serial/Stop (1.24s)

x
+
TestMountStart/serial/RestartStopped (8.02s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-058500
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-058500: (7.015083018s)
--- PASS: TestMountStart/serial/RestartStopped (8.02s)

x
+
TestMountStart/serial/VerifyMountPostStop (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-058500 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

x
+
TestMultiNode/serial/FreshStart2Nodes (127.95s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-951739 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1006 02:34:27.706367 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.crt: no such file or directory
E1006 02:34:33.943442 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/client.crt: no such file or directory
E1006 02:34:55.398222 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.crt: no such file or directory
E1006 02:35:56.988659 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p multinode-951739 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (2m7.377849606s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (127.95s)
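
--nodes brings up the control plane and workers in one start. A minimal sketch (profile name is a placeholder):

# two-node cluster: one control plane, one worker
minikube start -p multi --nodes=2 --memory=2200 --driver=docker --container-runtime=crio
# per-node host/kubelet/apiserver summary
minikube -p multi status --alsologtostderr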

x
+
TestMultiNode/serial/DeployApp2Nodes (7.03s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-951739 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-951739 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-951739 -- rollout status deployment/busybox: (4.656618803s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-951739 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-951739 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-951739 -- exec busybox-5bc68d56bd-qkd4k -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-951739 -- exec busybox-5bc68d56bd-z7b7t -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-951739 -- exec busybox-5bc68d56bd-qkd4k -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-951739 -- exec busybox-5bc68d56bd-z7b7t -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-951739 -- exec busybox-5bc68d56bd-qkd4k -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-951739 -- exec busybox-5bc68d56bd-z7b7t -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.03s)
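
The DNS checks fan out over both replicas so resolution is exercised from each node. A sketch of the same loop, assuming the busybox deployment from the test manifest is rolled out and its pods are the only ones in the namespace:

# resolve cluster DNS from every busybox pod, wherever it was scheduled
for pod in $(kubectl get pods -o jsonpath='{.items[*].metadata.name}'); do
  kubectl exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
done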

x
+
TestMultiNode/serial/AddNode (50.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-951739 -v 3 --alsologtostderr
E1006 02:36:41.624279 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/functional-642904/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-951739 -v 3 --alsologtostderr: (49.349620478s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.07s)

x
+
TestMultiNode/serial/ProfileList (0.36s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.36s)

x
+
TestMultiNode/serial/CopyFile (11.45s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 cp testdata/cp-test.txt multinode-951739:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 ssh -n multinode-951739 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 cp multinode-951739:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2514097087/001/cp-test_multinode-951739.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 ssh -n multinode-951739 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 cp multinode-951739:/home/docker/cp-test.txt multinode-951739-m02:/home/docker/cp-test_multinode-951739_multinode-951739-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 ssh -n multinode-951739 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 ssh -n multinode-951739-m02 "sudo cat /home/docker/cp-test_multinode-951739_multinode-951739-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 cp multinode-951739:/home/docker/cp-test.txt multinode-951739-m03:/home/docker/cp-test_multinode-951739_multinode-951739-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 ssh -n multinode-951739 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 ssh -n multinode-951739-m03 "sudo cat /home/docker/cp-test_multinode-951739_multinode-951739-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 cp testdata/cp-test.txt multinode-951739-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 ssh -n multinode-951739-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 cp multinode-951739-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2514097087/001/cp-test_multinode-951739-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 ssh -n multinode-951739-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 cp multinode-951739-m02:/home/docker/cp-test.txt multinode-951739:/home/docker/cp-test_multinode-951739-m02_multinode-951739.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 ssh -n multinode-951739-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 ssh -n multinode-951739 "sudo cat /home/docker/cp-test_multinode-951739-m02_multinode-951739.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 cp multinode-951739-m02:/home/docker/cp-test.txt multinode-951739-m03:/home/docker/cp-test_multinode-951739-m02_multinode-951739-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 ssh -n multinode-951739-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 ssh -n multinode-951739-m03 "sudo cat /home/docker/cp-test_multinode-951739-m02_multinode-951739-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 cp testdata/cp-test.txt multinode-951739-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 ssh -n multinode-951739-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 cp multinode-951739-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2514097087/001/cp-test_multinode-951739-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 ssh -n multinode-951739-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 cp multinode-951739-m03:/home/docker/cp-test.txt multinode-951739:/home/docker/cp-test_multinode-951739-m03_multinode-951739.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 ssh -n multinode-951739-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 ssh -n multinode-951739 "sudo cat /home/docker/cp-test_multinode-951739-m03_multinode-951739.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 cp multinode-951739-m03:/home/docker/cp-test.txt multinode-951739-m02:/home/docker/cp-test_multinode-951739-m03_multinode-951739-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 ssh -n multinode-951739-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 ssh -n multinode-951739-m02 "sudo cat /home/docker/cp-test_multinode-951739-m03_multinode-951739-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.45s)
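
minikube cp covers host-to-node and node-to-node copies, with ssh -n <node> used to verify the result on the target. A minimal sketch of the three directions (profile and node names are placeholders mirroring the log):

# host -> primary node
minikube -p multi cp testdata/cp-test.txt multi:/home/docker/cp-test.txt
# node -> host
minikube -p multi cp multi:/home/docker/cp-test.txt /tmp/cp-test.txt
# node -> node, then verify on the target node
minikube -p multi cp multi:/home/docker/cp-test.txt multi-m02:/home/docker/cp-test.txt
minikube -p multi ssh -n multi-m02 "sudo cat /home/docker/cp-test.txt"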

x
+
TestMultiNode/serial/StopNode (2.43s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-arm64 -p multinode-951739 node stop m03: (1.24564569s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-951739 status: exit status 7 (565.848836ms)
-- stdout --
	multinode-951739
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-951739-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-951739-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-951739 status --alsologtostderr: exit status 7 (618.228201ms)
-- stdout --
	multinode-951739
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-951739-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-951739-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1006 02:37:17.975622 2343245 out.go:296] Setting OutFile to fd 1 ...
	I1006 02:37:17.975753 2343245 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 02:37:17.975763 2343245 out.go:309] Setting ErrFile to fd 2...
	I1006 02:37:17.975769 2343245 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 02:37:17.975998 2343245 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-2262959/.minikube/bin
	I1006 02:37:17.976182 2343245 out.go:303] Setting JSON to false
	I1006 02:37:17.976278 2343245 mustload.go:65] Loading cluster: multinode-951739
	I1006 02:37:17.976355 2343245 notify.go:220] Checking for updates...
	I1006 02:37:17.976697 2343245 config.go:182] Loaded profile config "multinode-951739": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1006 02:37:17.976707 2343245 status.go:255] checking status of multinode-951739 ...
	I1006 02:37:17.977205 2343245 cli_runner.go:164] Run: docker container inspect multinode-951739 --format={{.State.Status}}
	I1006 02:37:18.013225 2343245 status.go:330] multinode-951739 host status = "Running" (err=<nil>)
	I1006 02:37:18.013281 2343245 host.go:66] Checking if "multinode-951739" exists ...
	I1006 02:37:18.013822 2343245 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-951739
	I1006 02:37:18.056885 2343245 host.go:66] Checking if "multinode-951739" exists ...
	I1006 02:37:18.057194 2343245 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 02:37:18.057250 2343245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951739
	I1006 02:37:18.078179 2343245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35339 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/multinode-951739/id_rsa Username:docker}
	I1006 02:37:18.170699 2343245 ssh_runner.go:195] Run: systemctl --version
	I1006 02:37:18.177456 2343245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 02:37:18.192019 2343245 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 02:37:18.271273 2343245 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-10-06 02:37:18.260363083 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1006 02:37:18.271865 2343245 kubeconfig.go:92] found "multinode-951739" server: "https://192.168.58.2:8443"
	I1006 02:37:18.271887 2343245 api_server.go:166] Checking apiserver status ...
	I1006 02:37:18.271928 2343245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 02:37:18.285316 2343245 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1247/cgroup
	I1006 02:37:18.296982 2343245 api_server.go:182] apiserver freezer: "3:freezer:/docker/9edc152e73eb4597075d85be5cd5b8d85a9eb4dcbd14cb603ed7c177be913edc/crio/crio-10d6403370796cae7254428f4066e97e6b1b8683539dcf4e973259dec188288c"
	I1006 02:37:18.297065 2343245 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9edc152e73eb4597075d85be5cd5b8d85a9eb4dcbd14cb603ed7c177be913edc/crio/crio-10d6403370796cae7254428f4066e97e6b1b8683539dcf4e973259dec188288c/freezer.state
	I1006 02:37:18.307719 2343245 api_server.go:204] freezer state: "THAWED"
	I1006 02:37:18.307753 2343245 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1006 02:37:18.317168 2343245 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1006 02:37:18.317204 2343245 status.go:421] multinode-951739 apiserver status = Running (err=<nil>)
	I1006 02:37:18.317240 2343245 status.go:257] multinode-951739 status: &{Name:multinode-951739 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1006 02:37:18.317264 2343245 status.go:255] checking status of multinode-951739-m02 ...
	I1006 02:37:18.317604 2343245 cli_runner.go:164] Run: docker container inspect multinode-951739-m02 --format={{.State.Status}}
	I1006 02:37:18.335636 2343245 status.go:330] multinode-951739-m02 host status = "Running" (err=<nil>)
	I1006 02:37:18.335660 2343245 host.go:66] Checking if "multinode-951739-m02" exists ...
	I1006 02:37:18.336038 2343245 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-951739-m02
	I1006 02:37:18.360789 2343245 host.go:66] Checking if "multinode-951739-m02" exists ...
	I1006 02:37:18.361102 2343245 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 02:37:18.361140 2343245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951739-m02
	I1006 02:37:18.384294 2343245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35344 SSHKeyPath:/home/jenkins/minikube-integration/17314-2262959/.minikube/machines/multinode-951739-m02/id_rsa Username:docker}
	I1006 02:37:18.481242 2343245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 02:37:18.494408 2343245 status.go:257] multinode-951739-m02 status: &{Name:multinode-951739-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1006 02:37:18.494439 2343245 status.go:255] checking status of multinode-951739-m03 ...
	I1006 02:37:18.494746 2343245 cli_runner.go:164] Run: docker container inspect multinode-951739-m03 --format={{.State.Status}}
	I1006 02:37:18.513489 2343245 status.go:330] multinode-951739-m03 host status = "Stopped" (err=<nil>)
	I1006 02:37:18.513512 2343245 status.go:343] host is not running, skipping remaining checks
	I1006 02:37:18.513520 2343245 status.go:257] multinode-951739-m03 status: &{Name:multinode-951739-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.43s)
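
The status probe traced in the stderr above works in two steps once the container is confirmed running: it locates the kube-apiserver process, reads its freezer cgroup to make sure the container is not paused, and only then queries /healthz. A minimal Go sketch of those two steps, with the cgroup path and endpoint copied from the log (an illustration, not minikube's actual status code):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"strings"
	"time"
)

func main() {
	// Step 1: the apiserver's freezer cgroup must report THAWED, i.e. the
	// container is not paused. Path copied from the log above.
	state, err := os.ReadFile("/sys/fs/cgroup/freezer/docker/9edc152e73eb4597075d85be5cd5b8d85a9eb4dcbd14cb603ed7c177be913edc/crio/crio-10d6403370796cae7254428f4066e97e6b1b8683539dcf4e973259dec188288c/freezer.state")
	if err == nil && strings.TrimSpace(string(state)) != "THAWED" {
		fmt.Println("apiserver container is frozen")
		return
	}

	// Step 2: GET /healthz; the apiserver presents a self-signed cert,
	// so verification is skipped for this probe.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.58.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz returned:", resp.StatusCode) // 200 with body "ok" when healthy
}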

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (12.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-arm64 -p multinode-951739 node start m03 --alsologtostderr: (12.007461307s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.85s)
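
Every `(dbg) Run` / `(dbg) Done` pair in this report follows the same harness pattern: shell out to the built binary, capture combined output, and time the call. A self-contained Go sketch of that pattern for the node-start step above (binary path and profile name taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "multinode-951739",
		"node", "start", "m03", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("node start failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("node start took %s\n", time.Since(start).Round(time.Millisecond))
}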

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (124.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-951739
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-951739
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-951739: (25.069216974s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-951739 --wait=true -v=8 --alsologtostderr
E1006 02:39:27.706615 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.crt: no such file or directory
E1006 02:39:33.943005 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 start -p multinode-951739 --wait=true -v=8 --alsologtostderr: (1m38.968639766s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-951739
--- PASS: TestMultiNode/serial/RestartKeepsNodes (124.23s)
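
RestartKeepsNodes passes when `node list` reports the same nodes before and after the stop/start cycle. A sketch of that final comparison (the node names are the ones in this run; in the real test they come from parsing the command output):

package main

import (
	"fmt"
	"reflect"
)

func main() {
	before := []string{"multinode-951739", "multinode-951739-m02", "multinode-951739-m03"}
	after := []string{"multinode-951739", "multinode-951739-m02", "multinode-951739-m03"}
	if !reflect.DeepEqual(before, after) {
		fmt.Println("restart changed the node set:", before, "->", after)
		return
	}
	fmt.Printf("all %d nodes survived the restart\n", len(after))
}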

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-arm64 -p multinode-951739 node delete m03: (4.387235594s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.16s)
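
The readiness check above embeds a go-template in the kubectl call. Because kubectl evaluates the template over the unstructured JSON it gets back from the API server, the field names are lowercase; the same template runs unchanged against a map-shaped stand-in, as this sketch shows:

package main

import (
	"os"
	"text/template"
)

func main() {
	// Template copied verbatim from the test invocation above.
	const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	node := map[string]any{
		"status": map[string]any{
			"conditions": []any{
				map[string]any{"type": "Ready", "status": "True"},
			},
		},
	}
	list := map[string]any{"items": []any{node, node}}

	t := template.Must(template.New("ready").Parse(tmpl))
	t.Execute(os.Stdout, list) // prints " True" once per node
}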

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (24.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p multinode-951739 stop: (23.950733623s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-951739 status: exit status 7 (105.84629ms)

                                                
                                                
-- stdout --
	multinode-951739
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-951739-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-951739 status --alsologtostderr: exit status 7 (103.057072ms)

                                                
                                                
-- stdout --
	multinode-951739
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-951739-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 02:40:04.872907 2351349 out.go:296] Setting OutFile to fd 1 ...
	I1006 02:40:04.873068 2351349 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 02:40:04.873077 2351349 out.go:309] Setting ErrFile to fd 2...
	I1006 02:40:04.873083 2351349 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 02:40:04.873320 2351349 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-2262959/.minikube/bin
	I1006 02:40:04.873506 2351349 out.go:303] Setting JSON to false
	I1006 02:40:04.873593 2351349 mustload.go:65] Loading cluster: multinode-951739
	I1006 02:40:04.873670 2351349 notify.go:220] Checking for updates...
	I1006 02:40:04.874022 2351349 config.go:182] Loaded profile config "multinode-951739": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1006 02:40:04.874033 2351349 status.go:255] checking status of multinode-951739 ...
	I1006 02:40:04.874619 2351349 cli_runner.go:164] Run: docker container inspect multinode-951739 --format={{.State.Status}}
	I1006 02:40:04.893561 2351349 status.go:330] multinode-951739 host status = "Stopped" (err=<nil>)
	I1006 02:40:04.893583 2351349 status.go:343] host is not running, skipping remaining checks
	I1006 02:40:04.893590 2351349 status.go:257] multinode-951739 status: &{Name:multinode-951739 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1006 02:40:04.893616 2351349 status.go:255] checking status of multinode-951739-m02 ...
	I1006 02:40:04.893957 2351349 cli_runner.go:164] Run: docker container inspect multinode-951739-m02 --format={{.State.Status}}
	I1006 02:40:04.911726 2351349 status.go:330] multinode-951739-m02 host status = "Stopped" (err=<nil>)
	I1006 02:40:04.911750 2351349 status.go:343] host is not running, skipping remaining checks
	I1006 02:40:04.911758 2351349 status.go:257] multinode-951739-m02 status: &{Name:multinode-951739-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.16s)
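
Exit status 7 on a fully stopped cluster is not arbitrary: per minikube's status documentation, the exit code encodes the host, cluster, and Kubernetes states as bits from right to left, so 1 + 2 + 4 = 7 means all three are down. A sketch of that encoding (flag names here are illustrative, not minikube's identifiers):

package main

import "fmt"

const (
	hostNOK    = 1 << 0 // 1: host not running
	clusterNOK = 1 << 1 // 2: cluster not running
	k8sNOK     = 1 << 2 // 4: kubernetes not running
)

func main() {
	// Both nodes report Host/Kubelet/APIServer stopped, so every flag is set.
	code := hostNOK | clusterNOK | k8sNOK
	fmt.Println("expected exit status:", code) // 7, matching the log
}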

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (80.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-951739 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:354: (dbg) Done: out/minikube-linux-arm64 start -p multinode-951739 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m19.252867809s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-arm64 -p multinode-951739 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (80.12s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (38.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-951739
multinode_test.go:452: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-951739-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-951739-m02 --driver=docker  --container-runtime=crio: exit status 14 (109.488858ms)

                                                
                                                
-- stdout --
	* [multinode-951739-m02] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17314
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17314-2262959/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-2262959/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-951739-m02' is duplicated with machine name 'multinode-951739-m02' in profile 'multinode-951739'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-951739-m03 --driver=docker  --container-runtime=crio
E1006 02:41:41.623804 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/functional-642904/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-arm64 start -p multinode-951739-m03 --driver=docker  --container-runtime=crio: (35.828114774s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-951739
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-951739: exit status 80 (352.316304ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-951739
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-951739-m03 already exists in multinode-951739-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-951739-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-951739-m03: (1.979280518s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (38.34s)
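
The exit-14 failure above comes from a uniqueness rule: a new profile name may not collide with any machine name inside an existing multi-node profile. A stand-in sketch of that check (the map literal mirrors the profiles in this run; minikube's real profile store lives on disk):

package main

import "fmt"

func validateProfileName(name string, machinesByProfile map[string][]string) error {
	for profile, machines := range machinesByProfile {
		for _, m := range machines {
			if m == name {
				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q", name, m, profile)
			}
		}
	}
	return nil
}

func main() {
	machines := map[string][]string{
		"multinode-951739": {"multinode-951739", "multinode-951739-m02"},
	}
	fmt.Println(validateProfileName("multinode-951739-m02", machines)) // rejected -> MK_USAGE
	fmt.Println(validateProfileName("multinode-951739-m03", machines)) // <nil>: allowed
}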

                                                
                                    
x
+
TestPreload (169.71s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-002432 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1006 02:43:04.668593 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/functional-642904/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-002432 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m23.351448293s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-002432 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-002432 image pull gcr.io/k8s-minikube/busybox: (2.283167978s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-002432
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-002432: (5.8597858s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-002432 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1006 02:44:27.707260 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.crt: no such file or directory
E1006 02:44:33.942716 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-002432 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m15.589526879s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-002432 image list
helpers_test.go:175: Cleaning up "test-preload-002432" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-002432
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-002432: (2.371720262s)
--- PASS: TestPreload (169.71s)

                                                
                                    
x
+
TestScheduledStopUnix (110.8s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-267345 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-267345 --memory=2048 --driver=docker  --container-runtime=crio: (33.759919546s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-267345 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-267345 -n scheduled-stop-267345
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-267345 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-267345 --cancel-scheduled
E1006 02:45:50.758453 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-267345 -n scheduled-stop-267345
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-267345
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-267345 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1006 02:46:41.624606 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/functional-642904/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-267345
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-267345: exit status 7 (130.83258ms)

                                                
                                                
-- stdout --
	scheduled-stop-267345
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-267345 -n scheduled-stop-267345
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-267345 -n scheduled-stop-267345: exit status 7 (108.124759ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-267345" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-267345
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-267345: (5.149577565s)
--- PASS: TestScheduledStopUnix (110.80s)
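
The schedule/cancel dance above (arm 5m, re-arm 15s, cancel, re-arm) maps onto a timer that is replaced or stopped as new commands arrive. minikube actually daemonizes a separate process for this, which is why the test sees "os: process already finished", but the control flow can be sketched in-process with time.AfterFunc:

package main

import (
	"fmt"
	"sync"
	"time"
)

type scheduler struct {
	mu    sync.Mutex
	timer *time.Timer
}

// schedule replaces any pending stop with a new one.
func (s *scheduler) schedule(d time.Duration, stop func()) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.timer != nil {
		s.timer.Stop()
	}
	s.timer = time.AfterFunc(d, stop)
}

// cancel corresponds to --cancel-scheduled.
func (s *scheduler) cancel() {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.timer != nil {
		s.timer.Stop()
		s.timer = nil
	}
}

func main() {
	var s scheduler
	stop := func() { fmt.Println("stopping cluster") }
	s.schedule(5*time.Minute, stop)
	s.schedule(15*time.Second, stop) // supersedes the 5m schedule
	s.cancel()                       // nothing fires now
	time.Sleep(100 * time.Millisecond)
}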

                                                
                                    
x
+
TestInsufficientStorage (10.77s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-635459 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-635459 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.124830488s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"165169df-4095-4f10-8f1a-51e43946c760","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-635459] minikube v1.31.2 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2e827803-bd68-4518-9708-da177018c775","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17314"}}
	{"specversion":"1.0","id":"980cc094-3ed1-4668-af6e-b65999b81d5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0321af23-e2ea-4971-a895-9c0382d6eae6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17314-2262959/kubeconfig"}}
	{"specversion":"1.0","id":"ecacf5f9-06e3-4ea8-a56d-5c12954d945c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-2262959/.minikube"}}
	{"specversion":"1.0","id":"6a3a633e-e803-4e02-b3fb-04d01b85a9a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"c2df0320-44ee-482b-a4f6-c8c6a51dae63","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"59508870-b3a4-4895-bd2b-23d3d83e90bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"bea31f32-763c-456e-a6ad-a3304da71f3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"25a844c6-722f-4ffa-b1bd-6b98459f4f57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1ecf5ed9-fa0e-44fa-bc8a-c58ef27d0417","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"68957b04-d8bc-4a81-bff6-c89827440c07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-635459 in cluster insufficient-storage-635459","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"44dbebdc-9f65-4338-9d56-159c00f534b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"eb30b59a-ea74-4a8f-a034-5db2b534f060","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"6fe924b0-3dcf-48c7-804b-dea92d53ec08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-635459 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-635459 --output=json --layout=cluster: exit status 7 (340.422245ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-635459","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-635459","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1006 02:46:58.782368 2368302 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-635459" does not appear in /home/jenkins/minikube-integration/17314-2262959/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-635459 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-635459 --output=json --layout=cluster: exit status 7 (338.700219ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-635459","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-635459","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1006 02:46:59.123156 2368356 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-635459" does not appear in /home/jenkins/minikube-integration/17314-2262959/kubeconfig
	E1006 02:46:59.135779 2368356 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/insufficient-storage-635459/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-635459" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-635459
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-635459: (1.963541215s)
--- PASS: TestInsufficientStorage (10.77s)
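
With --output=json, every progress line above is a CloudEvent whose type (io.k8s.sigs.minikube.step / .info / .error) and data payload drive machine consumers. Decoding one of the info events copied from this log takes only a small struct:

package main

import (
	"encoding/json"
	"fmt"
)

type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	raw := `{"specversion":"1.0","id":"59508870-b3a4-4895-bd2b-23d3d83e90bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}`
	var ev cloudEvent
	if err := json.Unmarshal([]byte(raw), &ev); err != nil {
		panic(err)
	}
	fmt.Println(ev.Type, "->", ev.Data["message"])
}

The error event at the end of the run carries extra keys (advice, exitcode, issues, name), all string-valued in this log, so the same map[string]string Data field decodes it too.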

                                                
                                    
x
+
TestKubernetesUpgrade (168.82s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-367582 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-367582 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m7.538932395s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-367582
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-367582: (1.456898639s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-367582 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-367582 status --format={{.Host}}: exit status 7 (103.460993ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-367582 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1006 02:49:33.942957 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/client.crt: no such file or directory
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-367582 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.719720126s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-367582 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-367582 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-367582 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (259.4726ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-367582] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17314
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17314-2262959/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-2262959/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-367582
	    minikube start -p kubernetes-upgrade-367582 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3675822 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.2, by running:
	    
	    minikube start -p kubernetes-upgrade-367582 --kubernetes-version=v1.28.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-367582 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-367582 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m0.121854096s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-367582" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-367582
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-367582: (3.471941148s)
--- PASS: TestKubernetesUpgrade (168.82s)
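
The downgrade refusal (exit status 106, K8S_DOWNGRADE_UNSUPPORTED) reduces to a version comparison between the deployed cluster and the requested --kubernetes-version. A dependency-free sketch of that guard (minikube itself uses a proper semver library; this parser assumes well-formed vMAJOR.MINOR.PATCH input):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

func parse(v string) [3]int {
	var out [3]int
	for i, p := range strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3) {
		out[i], _ = strconv.Atoi(p)
	}
	return out
}

func isDowngrade(current, requested string) bool {
	c, r := parse(current), parse(requested)
	for i := 0; i < 3; i++ {
		if r[i] != c[i] {
			return r[i] < c[i]
		}
	}
	return false
}

func main() {
	fmt.Println(isDowngrade("v1.28.2", "v1.16.0")) // true: refused with exit 106
	fmt.Println(isDowngrade("v1.16.0", "v1.28.2")) // false: upgrade allowed
}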

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-316798 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-316798 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (94.190224ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-316798] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17314
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17314-2262959/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-2262959/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
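
The MK_USAGE rejection above is plain flag validation: --no-kubernetes and --kubernetes-version are mutually exclusive. A stdlib sketch of the rule (minikube's CLI is built on cobra, so this only illustrates the check itself):

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to deploy")
	flag.Parse()

	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14)
	}
	fmt.Println("flags accepted")
}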

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (47.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-316798 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-316798 --driver=docker  --container-runtime=crio: (46.899177768s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-316798 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (47.52s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (8.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-316798 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-316798 --no-kubernetes --driver=docker  --container-runtime=crio: (5.923103142s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-316798 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-316798 status -o json: exit status 2 (416.020781ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-316798","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-316798
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-316798: (2.071580398s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.41s)
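
The single-line JSON from `status -o json` above is flat enough to decode with a small struct; this mirrors only the fields visible in the log, not minikube's internal Status type:

package main

import (
	"encoding/json"
	"fmt"
)

type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"NoKubernetes-316798","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var st profileStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	// With Kubernetes removed, the host keeps running while kubelet and
	// apiserver stay stopped, which is the exit-status-2 state above.
	fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n", st.Name, st.Host, st.Kubelet, st.APIServer)
}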

                                                
                                    
x
+
TestNoKubernetes/serial/Start (10.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-316798 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-316798 --no-kubernetes --driver=docker  --container-runtime=crio: (10.183971979s)
--- PASS: TestNoKubernetes/serial/Start (10.18s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-316798 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-316798 "sudo systemctl is-active --quiet service kubelet": exit status 1 (391.22505ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)
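
The "Process exited with status 3" in stderr is the assertion doing its job: `systemctl is-active` exits 0 only when the unit is active, and a non-zero code (3 here, for an inactive unit) is exactly what a cluster started without Kubernetes should produce. Reading that exit code from Go looks like this (run locally rather than over ssh for the sketch):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation the test sends over ssh.
	err := exec.Command("systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	if exitErr, ok := err.(*exec.ExitError); ok {
		fmt.Println("kubelet not active, exit code:", exitErr.ExitCode()) // the state the test expects
		return
	}
	if err == nil {
		fmt.Println("kubelet is active") // this would fail VerifyK8sNotRunning
	}
}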

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.12s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-316798
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-316798: (1.316148323s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7.65s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-316798 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-316798 --driver=docker  --container-runtime=crio: (7.651241526s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.65s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-316798 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-316798 "sudo systemctl is-active --quiet service kubelet": exit status 1 (467.245353ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.47s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (1.04s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.04s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.68s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-670887
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.68s)

                                                
                                    
x
+
TestPause/serial/Start (86.38s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-647181 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1006 02:51:41.623697 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/functional-642904/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-647181 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m26.377700728s)
--- PASS: TestPause/serial/Start (86.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-084205 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-084205 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (219.479408ms)

                                                
                                                
-- stdout --
	* [false-084205] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17314
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17314-2262959/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-2262959/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1006 02:52:47.428395 2401093 out.go:296] Setting OutFile to fd 1 ...
	I1006 02:52:47.428560 2401093 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 02:52:47.428570 2401093 out.go:309] Setting ErrFile to fd 2...
	I1006 02:52:47.428576 2401093 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 02:52:47.428829 2401093 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-2262959/.minikube/bin
	I1006 02:52:47.429229 2401093 out.go:303] Setting JSON to false
	I1006 02:52:47.430345 2401093 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":45314,"bootTime":1696515454,"procs":329,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1006 02:52:47.430423 2401093 start.go:138] virtualization:  
	I1006 02:52:47.433148 2401093 out.go:177] * [false-084205] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1006 02:52:47.435796 2401093 out.go:177]   - MINIKUBE_LOCATION=17314
	I1006 02:52:47.437698 2401093 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 02:52:47.435951 2401093 notify.go:220] Checking for updates...
	I1006 02:52:47.441380 2401093 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17314-2262959/kubeconfig
	I1006 02:52:47.443292 2401093 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-2262959/.minikube
	I1006 02:52:47.445221 2401093 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 02:52:47.447401 2401093 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 02:52:47.450904 2401093 config.go:182] Loaded profile config "pause-647181": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1006 02:52:47.451156 2401093 driver.go:378] Setting default libvirt URI to qemu:///system
	I1006 02:52:47.475488 2401093 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1006 02:52:47.475597 2401093 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 02:52:47.564498 2401093 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-06 02:52:47.553761505 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1006 02:52:47.564621 2401093 docker.go:295] overlay module found
	I1006 02:52:47.567186 2401093 out.go:177] * Using the docker driver based on user configuration
	I1006 02:52:47.569195 2401093 start.go:298] selected driver: docker
	I1006 02:52:47.569206 2401093 start.go:902] validating driver "docker" against <nil>
	I1006 02:52:47.569219 2401093 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 02:52:47.572026 2401093 out.go:177] 
	W1006 02:52:47.574108 2401093 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1006 02:52:47.576326 2401093 out.go:177] 

                                                
                                                
** /stderr **
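
The immediate exit-14 here comes from driver/runtime validation that runs before any container is created: crio brings no built-in pod networking, so a start with --cni=false cannot proceed. A stand-in for that check:

package main

import (
	"fmt"
	"os"
)

// validateCNI mirrors the rule stated in the error above: the crio
// runtime needs some CNI, so explicitly disabling it is a usage error.
func validateCNI(runtime, cni string) error {
	if runtime == "crio" && cni == "false" {
		return fmt.Errorf("The %q container runtime requires CNI", runtime)
	}
	return nil
}

func main() {
	if err := validateCNI("crio", "false"); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14)
	}
}

The debugLogs dump that follows is expected to be all errors: the profile was never created, so every kubectl and minikube probe against the false-084205 context fails.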
net_test.go:88: 
----------------------- debugLogs start: false-084205 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-084205

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-084205

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-084205

                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-084205

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-084205

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-084205

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-084205

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-084205

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-084205

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-084205

>>> host: /etc/nsswitch.conf:
* Profile "false-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084205"

>>> host: /etc/hosts:
* Profile "false-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084205"

>>> host: /etc/resolv.conf:
* Profile "false-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084205"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-084205

>>> host: crictl pods:
* Profile "false-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084205"

>>> host: crictl containers:
* Profile "false-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084205"

>>> k8s: describe netcat deployment:
error: context "false-084205" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-084205" does not exist

>>> k8s: netcat logs:
error: context "false-084205" does not exist

>>> k8s: describe coredns deployment:
error: context "false-084205" does not exist

>>> k8s: describe coredns pods:
error: context "false-084205" does not exist

>>> k8s: coredns logs:
error: context "false-084205" does not exist

>>> k8s: describe api server pod(s):
error: context "false-084205" does not exist

>>> k8s: api server logs:
error: context "false-084205" does not exist

>>> host: /etc/cni:
* Profile "false-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084205"

>>> host: ip a s:
* Profile "false-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084205"

>>> host: ip r s:
* Profile "false-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084205"

>>> host: iptables-save:
* Profile "false-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084205"

>>> host: iptables table nat:
* Profile "false-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084205"

>>> k8s: describe kube-proxy daemon set:
error: context "false-084205" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-084205" does not exist

>>> k8s: kube-proxy logs:
error: context "false-084205" does not exist

>>> host: kubelet daemon status:
* Profile "false-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084205"

>>> host: kubelet daemon config:
* Profile "false-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084205"

>>> k8s: kubelet logs:
* Profile "false-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084205"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084205"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084205"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 06 Oct 2023 02:52:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-647181
contexts:
- context:
    cluster: pause-647181
    extensions:
    - extension:
        last-update: Fri, 06 Oct 2023 02:52:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: pause-647181
  name: pause-647181
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-647181
  user:
    client-certificate: /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/pause-647181/client.crt
    client-key: /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/pause-647181/client.key

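Note: the kubeconfig dumped above explains the failures in this debugLogs run: it holds only a pause-647181 entry and current-context is empty, so every command pinned to the false-084205 context fails with "context was not found". A minimal sketch of how such a kubeconfig could be inspected (hypothetical commands, not part of the recorded run):

	kubectl config get-contexts                  # would list only pause-647181 here
	kubectl config use-context pause-647181      # select a context that actually exists
	kubectl --context pause-647181 get pods -A   # or pin a single command to that context
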
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-084205

>>> host: docker daemon status:
* Profile "false-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084205"

>>> host: docker daemon config:
* Profile "false-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084205"

>>> host: /etc/docker/daemon.json:
* Profile "false-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084205"

>>> host: docker system info:
* Profile "false-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084205"

>>> host: cri-docker daemon status:
* Profile "false-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084205"

>>> host: cri-docker daemon config:
* Profile "false-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084205"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084205"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084205"

>>> host: cri-dockerd version:
* Profile "false-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084205"

>>> host: containerd daemon status:
* Profile "false-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084205"

>>> host: containerd daemon config:
* Profile "false-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084205"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084205"

>>> host: /etc/containerd/config.toml:
* Profile "false-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084205"

>>> host: containerd config dump:
* Profile "false-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084205"

>>> host: crio daemon status:
* Profile "false-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084205"

>>> host: crio daemon config:
* Profile "false-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084205"

>>> host: /etc/crio:
* Profile "false-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084205"

>>> host: crio config:
* Profile "false-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084205"

----------------------- debugLogs end: false-084205 [took: 3.841113278s] --------------------------------
helpers_test.go:175: Cleaning up "false-084205" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-084205
--- PASS: TestNetworkPlugins/group/false (4.23s)
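
Note: every probe in the debugLogs block above failed identically because no false-084205 profile or kubeconfig context existed when the logs were collected. A minimal sketch of that failure mode (hypothetical commands, not from the recorded run):

	minikube profile list                      # would not show false-084205
	kubectl --context false-084205 get nodes   # error: context "false-084205" does not exist
	minikube start -p false-084205             # would have to succeed before any probe could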

TestStartStop/group/old-k8s-version/serial/FirstStart (124.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-477664 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E1006 02:56:41.625053 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/functional-642904/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-477664 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m4.093813602s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (124.09s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-477664 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b7ce6879-7e4c-4894-b929-7968e3540347] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b7ce6879-7e4c-4894-b929-7968e3540347] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.032517529s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-477664 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.55s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-477664 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-477664 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.04s)

TestStartStop/group/old-k8s-version/serial/Stop (12.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-477664 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-477664 --alsologtostderr -v=3: (12.207584823s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.21s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-477664 -n old-k8s-version-477664
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-477664 -n old-k8s-version-477664: exit status 7 (99.542078ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-477664 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/old-k8s-version/serial/SecondStart (454.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-477664 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-477664 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m33.95395995s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-477664 -n old-k8s-version-477664
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (454.52s)

TestStartStop/group/no-preload/serial/FirstStart (66.22s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-808742 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-808742 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (1m6.218500441s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (66.22s)

TestStartStop/group/no-preload/serial/DeployApp (9.48s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-808742 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f72f5420-54ac-4fb3-ba4a-6c9d271cdf55] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f72f5420-54ac-4fb3-ba4a-6c9d271cdf55] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.029319403s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-808742 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.48s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-808742 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-808742 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.084754195s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-808742 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/no-preload/serial/Stop (12.13s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-808742 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-808742 --alsologtostderr -v=3: (12.127118911s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.13s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-808742 -n no-preload-808742
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-808742 -n no-preload-808742: exit status 7 (96.544552ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-808742 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (347.56s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-808742 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E1006 02:59:27.706655 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.crt: no such file or directory
E1006 02:59:33.943460 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/client.crt: no such file or directory
E1006 02:59:44.669164 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/functional-642904/client.crt: no such file or directory
E1006 03:01:41.624632 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/functional-642904/client.crt: no such file or directory
E1006 03:02:30.759324 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.crt: no such file or directory
E1006 03:04:27.707280 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.crt: no such file or directory
E1006 03:04:33.943176 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-808742 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (5m47.07769978s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-808742 -n no-preload-808742
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (347.56s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-gnn22" [bc14fb89-3406-4c40-a775-b26f8d1d6ac0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.02894267s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-gnn22" [bc14fb89-3406-4c40-a775-b26f8d1d6ac0] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00950342s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-477664 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-477664 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.38s)

TestStartStop/group/old-k8s-version/serial/Pause (4.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-477664 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-477664 --alsologtostderr -v=1: (1.01093907s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-477664 -n old-k8s-version-477664
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-477664 -n old-k8s-version-477664: exit status 2 (379.162184ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-477664 -n old-k8s-version-477664
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-477664 -n old-k8s-version-477664: exit status 2 (389.140191ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-477664 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-477664 -n old-k8s-version-477664
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-477664 -n old-k8s-version-477664
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.02s)

TestStartStop/group/embed-certs/serial/FirstStart (84.92s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-946916 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-946916 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (1m24.923301628s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (84.92s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-mx5fp" [f6f5f69f-97e8-4ec0-883f-f6699a2e4945] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-mx5fp" [f6f5f69f-97e8-4ec0-883f-f6699a2e4945] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.029313746s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.17s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-mx5fp" [f6f5f69f-97e8-4ec0-883f-f6699a2e4945] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.02940754s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-808742 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.17s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.52s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-808742 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.52s)

TestStartStop/group/no-preload/serial/Pause (4.66s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-808742 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-808742 --alsologtostderr -v=1: (1.104783624s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-808742 -n no-preload-808742
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-808742 -n no-preload-808742: exit status 2 (456.91941ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-808742 -n no-preload-808742
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-808742 -n no-preload-808742: exit status 2 (423.796895ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-808742 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p no-preload-808742 --alsologtostderr -v=1: (1.142352714s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-808742 -n no-preload-808742
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-808742 -n no-preload-808742
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.66s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.86s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-411157 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-411157 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (53.859111448s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.86s)

TestStartStop/group/embed-certs/serial/DeployApp (8.61s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-946916 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5b6adfc0-cd78-4b13-9483-3120ec10c88d] Pending
helpers_test.go:344: "busybox" [5b6adfc0-cd78-4b13-9483-3120ec10c88d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5b6adfc0-cd78-4b13-9483-3120ec10c88d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.037102429s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-946916 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.61s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-411157 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b870a106-b019-43ea-9a87-f5e47fdbd3a8] Pending
helpers_test.go:344: "busybox" [b870a106-b019-43ea-9a87-f5e47fdbd3a8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b870a106-b019-43ea-9a87-f5e47fdbd3a8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.038180873s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-411157 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.58s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-946916 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-946916 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.094222629s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-946916 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.21s)

TestStartStop/group/embed-certs/serial/Stop (12.19s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-946916 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-946916 --alsologtostderr -v=3: (12.186103812s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.19s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-411157 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-411157 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.141307367s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-411157 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.25s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-411157 --alsologtostderr -v=3
E1006 03:06:41.623778 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/functional-642904/client.crt: no such file or directory
E1006 03:06:47.554001 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/old-k8s-version-477664/client.crt: no such file or directory
E1006 03:06:47.559627 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/old-k8s-version-477664/client.crt: no such file or directory
E1006 03:06:47.569974 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/old-k8s-version-477664/client.crt: no such file or directory
E1006 03:06:47.590319 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/old-k8s-version-477664/client.crt: no such file or directory
E1006 03:06:47.630640 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/old-k8s-version-477664/client.crt: no such file or directory
E1006 03:06:47.711120 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/old-k8s-version-477664/client.crt: no such file or directory
E1006 03:06:47.871612 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/old-k8s-version-477664/client.crt: no such file or directory
E1006 03:06:48.192269 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/old-k8s-version-477664/client.crt: no such file or directory
E1006 03:06:48.833303 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/old-k8s-version-477664/client.crt: no such file or directory
E1006 03:06:50.114129 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/old-k8s-version-477664/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-411157 --alsologtostderr -v=3: (12.042636117s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.04s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-946916 -n embed-certs-946916
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-946916 -n embed-certs-946916: exit status 7 (82.18963ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-946916 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (629.82s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-946916 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E1006 03:06:52.674387 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/old-k8s-version-477664/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-946916 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (10m29.390873269s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-946916 -n embed-certs-946916
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (629.82s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-411157 -n default-k8s-diff-port-411157
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-411157 -n default-k8s-diff-port-411157: exit status 7 (85.416781ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-411157 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (354.5s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-411157 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E1006 03:06:57.794976 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/old-k8s-version-477664/client.crt: no such file or directory
E1006 03:07:08.035731 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/old-k8s-version-477664/client.crt: no such file or directory
E1006 03:07:28.516837 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/old-k8s-version-477664/client.crt: no such file or directory
E1006 03:08:09.477118 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/old-k8s-version-477664/client.crt: no such file or directory
E1006 03:08:59.903545 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/no-preload-808742/client.crt: no such file or directory
E1006 03:08:59.908865 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/no-preload-808742/client.crt: no such file or directory
E1006 03:08:59.919155 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/no-preload-808742/client.crt: no such file or directory
E1006 03:08:59.939394 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/no-preload-808742/client.crt: no such file or directory
E1006 03:08:59.979632 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/no-preload-808742/client.crt: no such file or directory
E1006 03:09:00.062540 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/no-preload-808742/client.crt: no such file or directory
E1006 03:09:00.222934 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/no-preload-808742/client.crt: no such file or directory
E1006 03:09:00.543430 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/no-preload-808742/client.crt: no such file or directory
E1006 03:09:01.184307 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/no-preload-808742/client.crt: no such file or directory
E1006 03:09:02.464892 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/no-preload-808742/client.crt: no such file or directory
E1006 03:09:05.025042 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/no-preload-808742/client.crt: no such file or directory
E1006 03:09:10.146074 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/no-preload-808742/client.crt: no such file or directory
E1006 03:09:16.989268 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/client.crt: no such file or directory
E1006 03:09:20.386259 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/no-preload-808742/client.crt: no such file or directory
E1006 03:09:27.706848 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.crt: no such file or directory
E1006 03:09:31.398102 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/old-k8s-version-477664/client.crt: no such file or directory
E1006 03:09:33.942774 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/client.crt: no such file or directory
E1006 03:09:40.867370 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/no-preload-808742/client.crt: no such file or directory
E1006 03:10:21.828067 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/no-preload-808742/client.crt: no such file or directory
E1006 03:11:41.623761 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/functional-642904/client.crt: no such file or directory
E1006 03:11:43.749264 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/no-preload-808742/client.crt: no such file or directory
E1006 03:11:47.554193 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/old-k8s-version-477664/client.crt: no such file or directory
E1006 03:12:15.238304 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/old-k8s-version-477664/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-411157 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (5m53.801225319s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-411157 -n default-k8s-diff-port-411157
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (354.50s)
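
Note: the interleaved E1006 cert_rotation.go:168 errors above come from the test binary's client-go certificate-reload watcher, which still references client.crt files belonging to profiles torn down by earlier tests (old-k8s-version-477664, no-preload-808742, and others). They appear to be benign cross-test log noise and have no bearing on this test's result.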

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (18.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xkh9p" [b6629afa-95cf-4dc6-ad0f-54cd1907ff23] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xkh9p" [b6629afa-95cf-4dc6-ad0f-54cd1907ff23] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 18.078950756s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (18.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xkh9p" [b6629afa-95cf-4dc6-ad0f-54cd1907ff23] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012492276s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-411157 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-411157 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.37s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.51s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-411157 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-411157 -n default-k8s-diff-port-411157
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-411157 -n default-k8s-diff-port-411157: exit status 2 (373.222516ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-411157 -n default-k8s-diff-port-411157
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-411157 -n default-k8s-diff-port-411157: exit status 2 (356.901847ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-411157 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-411157 -n default-k8s-diff-port-411157
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-411157 -n default-k8s-diff-port-411157
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.51s)
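
The pause verification above can be replayed by hand. A minimal sketch of the same sequence, using the commands exactly as the test runs them (exit status 2 from status is expected while the cluster is paused):

	# pause the control plane, then confirm the reported component states
	out/minikube-linux-arm64 pause -p default-k8s-diff-port-411157 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-411157   # prints "Paused", exits 2
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-411157     # prints "Stopped", exits 2
	# unpause and re-check; both status queries should succeed again
	out/minikube-linux-arm64 unpause -p default-k8s-diff-port-411157 --alsologtostderr -v=1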

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (44.41s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-948099 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E1006 03:13:59.903839 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/no-preload-808742/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-948099 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (44.411241979s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (44.41s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-948099 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-948099 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.154696741s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.15s)
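
The WARNING above, together with the zero-duration DeployApp result earlier and the skipped UserAppExistsAfterStop/AddonExistsAfterStop checks later in this group, reflects that newest-cni starts with --network-plugin=cni but never applies a CNI manifest, so workload pods cannot schedule and the pod-level assertions are intentionally skipped.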

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.28s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-948099 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-948099 --alsologtostderr -v=3: (1.279732665s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-948099 -n newest-cni-948099
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-948099 -n newest-cni-948099: exit status 7 (104.223816ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-948099 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)
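
Exit status 7 from status is the expected "Stopped" code here, and addons can still be toggled against a stopped profile. A minimal sketch of the check, using the commands as run above (the addon setting is recorded while stopped and applied on the next start):

	# status prints "Stopped" and exits 7 while the host is down
	out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-948099 -n newest-cni-948099
	# enabling the dashboard against the stopped profile still succeeds
	out/minikube-linux-arm64 addons enable dashboard -p newest-cni-948099 --images=MetricsScraper=registry.k8s.io/echoserver:1.4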

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (32.23s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-948099 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E1006 03:14:27.589614 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/no-preload-808742/client.crt: no such file or directory
E1006 03:14:27.706335 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/ingress-addon-legacy-923493/client.crt: no such file or directory
E1006 03:14:33.942605 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/addons-891734/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-948099 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (31.795065043s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-948099 -n newest-cni-948099
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (32.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-948099 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)
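
VerifyKubernetesImages simply lists what CRI-O inside the node knows about and flags anything outside the expected Kubernetes image set. A sketch of the same listing, assuming jq is available on the host driving the test:

	# dump CRI-O's image store as JSON and print each repo tag
	out/minikube-linux-arm64 ssh -p newest-cni-948099 "sudo crictl images -o json" \
		| jq -r '.images[].repoTags[]'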

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.41s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-948099 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-948099 -n newest-cni-948099
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-948099 -n newest-cni-948099: exit status 2 (396.302363ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-948099 -n newest-cni-948099
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-948099 -n newest-cni-948099: exit status 2 (387.028646ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-948099 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-948099 -n newest-cni-948099
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-948099 -n newest-cni-948099
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.41s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (81.68s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-084205 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-084205 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m21.673569784s)
--- PASS: TestNetworkPlugins/group/auto/Start (81.68s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-084205 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.39s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-084205 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kkcdw" [ee705690-9c2d-4e44-97bf-294a6ccd39a0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kkcdw" [ee705690-9c2d-4e44-97bf-294a6ccd39a0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.013008983s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.39s)
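
Each NetCatPod step force-replaces a small netcat deployment and then waits for its pod to become Ready. A roughly equivalent manual version with plain kubectl, using the same context and manifest as the test (kubectl wait stands in for the harness's own polling helper):

	kubectl --context auto-084205 replace --force -f testdata/netcat-deployment.yaml
	# block until the deployment's pod reports Ready, mirroring the test's 15m wait
	kubectl --context auto-084205 wait --for=condition=ready pod -l app=netcat --timeout=15m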

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-084205 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-084205 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-084205 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.21s)
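
DNS, Localhost and HairPin exercise three distinct paths from inside the netcat pod: cluster DNS resolution, the pod's own loopback, and hairpin traffic back to the pod through its own Service. The probes, as run in the steps above:

	# cluster DNS: resolve the kubernetes.default service name
	kubectl --context auto-084205 exec deployment/netcat -- nslookup kubernetes.default
	# loopback: connect to port 8080 on localhost inside the pod
	kubectl --context auto-084205 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# hairpin: reach the pod back through its own "netcat" service
	kubectl --context auto-084205 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"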

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (80.82s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-084205 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1006 03:16:40.897725 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/default-k8s-diff-port-411157/client.crt: no such file or directory
E1006 03:16:41.624000 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/functional-642904/client.crt: no such file or directory
E1006 03:16:47.554105 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/old-k8s-version-477664/client.crt: no such file or directory
E1006 03:16:51.138005 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/default-k8s-diff-port-411157/client.crt: no such file or directory
E1006 03:17:11.618810 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/default-k8s-diff-port-411157/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-084205 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m20.815559986s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (80.82s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-4cttx" [7221f700-6dc7-4b5d-bc08-1617ddf643ff] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.024804268s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-4cttx" [7221f700-6dc7-4b5d-bc08-1617ddf643ff] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012876278s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-946916 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-946916 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.39s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.63s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-946916 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-946916 --alsologtostderr -v=1: (1.00861289s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-946916 -n embed-certs-946916
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-946916 -n embed-certs-946916: exit status 2 (366.911435ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-946916 -n embed-certs-946916
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-946916 -n embed-certs-946916: exit status 2 (379.12767ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-946916 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-946916 -n embed-certs-946916
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-946916 -n embed-certs-946916
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.63s)
E1006 03:22:27.935914 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/auto-084205/client.crt: no such file or directory

                                                
                                    
TestNetworkPlugins/group/calico/Start (72.01s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-084205 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1006 03:17:52.579013 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/default-k8s-diff-port-411157/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-084205 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m12.007375546s)
--- PASS: TestNetworkPlugins/group/calico/Start (72.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-p8n95" [10360360-6fda-414e-aecd-cbb233f58b2e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.029926854s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)
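
ControllerPod gates the connectivity probes on the CNI's agent pod being healthy. A minimal manual equivalent, assuming the kindnet DaemonSet keeps the app=kindnet pod label seen above:

	# wait for the kindnet agent pod(s) in kube-system to report Ready
	kubectl --context kindnet-084205 -n kube-system wait --for=condition=ready pod -l app=kindnet --timeout=10m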

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-084205 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (13.45s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-084205 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lm99v" [5e60596c-79c8-4df0-ad4d-9cbd62d9763e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-lm99v" [5e60596c-79c8-4df0-ad4d-9cbd62d9763e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.015878033s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.45s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-084205 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-084205 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-084205 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (66.8s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-084205 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-084205 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m6.797805895s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (66.80s)
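
custom-flannel shows --cni pointing at an arbitrary manifest instead of a built-in plugin name. The invocation, trimmed here to the CNI-relevant flags from the full command above:

	# start a cluster whose CNI comes from a user-supplied flannel manifest
	out/minikube-linux-arm64 start -p custom-flannel-084205 --memory=3072 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio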

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.06s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-h597b" [6c5473db-7b22-4e3f-b014-6b13cbdb749a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.056844041s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.06s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.49s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-084205 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.49s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (13.57s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-084205 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-sv7jw" [59d91f5d-d8a1-4e5a-ae56-b4a2c17e255a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1006 03:18:59.903579 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/no-preload-808742/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-sv7jw" [59d91f5d-d8a1-4e5a-ae56-b4a2c17e255a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.015974645s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.57s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.34s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-084205 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.34s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-084205 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-084205 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (48.89s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-084205 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-084205 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (48.886682535s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (48.89s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.46s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-084205 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.46s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.45s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-084205 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4rcxp" [527455b1-9e78-489b-889f-187675713317] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4rcxp" [527455b1-9e78-489b-889f-187675713317] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.017999713s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.45s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-084205 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.33s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-084205 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-084205 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-084205 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.47s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.56s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-084205 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-s2c54" [c7a1d904-7742-4928-bb6c-905b747ec7f4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-s2c54" [c7a1d904-7742-4928-bb6c-905b747ec7f4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.01195978s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.56s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (67.68s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-084205 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-084205 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m7.684856455s)
--- PASS: TestNetworkPlugins/group/flannel/Start (67.68s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (26.51s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-084205 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-084205 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.257270214s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-084205 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context enable-default-cni-084205 exec deployment/netcat -- nslookup kubernetes.default: (10.257440282s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (26.51s)
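
The first nslookup above timed out and the step only passed on the harness's retry, which is why this DNS check took 26.51s. A hypothetical retry loop of the same shape (the harness's actual backoff logic may differ):

	# retry in-pod DNS resolution until it succeeds
	until kubectl --context enable-default-cni-084205 exec deployment/netcat -- nslookup kubernetes.default; do
		sleep 5
	done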

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-084205 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1006 03:21:06.009718 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/auto-084205/client.crt: no such file or directory
E1006 03:21:06.014997 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/auto-084205/client.crt: no such file or directory
E1006 03:21:06.025356 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/auto-084205/client.crt: no such file or directory
E1006 03:21:06.046789 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/auto-084205/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-084205 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1006 03:21:06.088664 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/auto-084205/client.crt: no such file or directory
E1006 03:21:06.168933 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/auto-084205/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (79.16s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-084205 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1006 03:21:41.624099 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/functional-642904/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-084205 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m19.162551481s)
--- PASS: TestNetworkPlugins/group/bridge/Start (79.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.05s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-5g48h" [345a2821-15ad-477f-b8e8-4b73cebf4e2a] Running
E1006 03:21:46.974962 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/auto-084205/client.crt: no such file or directory
E1006 03:21:47.554821 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/old-k8s-version-477664/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.042059577s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.05s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-084205 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.41s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-084205 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fpvsh" [f20c932e-2acd-48da-b88b-1ccf52c73e4b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-fpvsh" [f20c932e-2acd-48da-b88b-1ccf52c73e4b] Running
E1006 03:21:58.340168 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/default-k8s-diff-port-411157/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.012969231s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.41s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-084205 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-084205 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-084205 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-084205 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.32s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-084205 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-gsrsd" [e69ea124-8600-4970-94fb-65485ca3e5e7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-gsrsd" [e69ea124-8600-4970-94fb-65485ca3e5e7] Running
E1006 03:23:00.726649 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/kindnet-084205/client.crt: no such file or directory
E1006 03:23:00.731876 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/kindnet-084205/client.crt: no such file or directory
E1006 03:23:00.742226 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/kindnet-084205/client.crt: no such file or directory
E1006 03:23:00.762631 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/kindnet-084205/client.crt: no such file or directory
E1006 03:23:00.802955 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/kindnet-084205/client.crt: no such file or directory
E1006 03:23:00.883323 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/kindnet-084205/client.crt: no such file or directory
E1006 03:23:01.043909 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/kindnet-084205/client.crt: no such file or directory
E1006 03:23:01.364480 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/kindnet-084205/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.011710002s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.32s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-084205 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.22s)

TestNetworkPlugins/group/bridge/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-084205 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.20s)

TestNetworkPlugins/group/bridge/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-084205 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1006 03:23:02.005189 2268306 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/kindnet-084205/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)
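Note: Localhost and HairPin probe the same netcat listener from two directions: the first dials localhost inside the pod, while the second dials the pod's own "netcat" service, which only succeeds when the CNI handles hairpin traffic (a pod reaching itself back through its service VIP). A minimal re-run, assuming the netcat deployment and service from the test are still present:

    # same-pod loopback check
    kubectl --context bridge-084205 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # hairpin check: the pod reaches itself via the service name
    kubectl --context bridge-084205 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"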

                                                
                                    

Test skip (29/302)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.2/cached-images (0.00s)

TestDownloadOnly/v1.28.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.2/binaries (0.00s)

TestDownloadOnly/v1.28.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.2/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.2/kubectl (0.00s)

TestDownloadOnlyKic (0.67s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-058597 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:234: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-058597" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-058597
--- SKIP: TestDownloadOnlyKic (0.67s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:443: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.22s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-911165" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-911165
--- SKIP: TestStartStop/group/disable-driver-mounts (0.22s)

TestNetworkPlugins/group/kubenet (5.7s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-084205 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-084205

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-084205

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-084205

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-084205

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-084205

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-084205

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-084205

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-084205

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-084205

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-084205

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084205"

>>> host: /etc/hosts:
* Profile "kubenet-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084205"

>>> host: /etc/resolv.conf:
* Profile "kubenet-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084205"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-084205

>>> host: crictl pods:
* Profile "kubenet-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084205"

>>> host: crictl containers:
* Profile "kubenet-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084205"

>>> k8s: describe netcat deployment:
error: context "kubenet-084205" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-084205" does not exist

>>> k8s: netcat logs:
error: context "kubenet-084205" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-084205" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-084205" does not exist

>>> k8s: coredns logs:
error: context "kubenet-084205" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-084205" does not exist

>>> k8s: api server logs:
error: context "kubenet-084205" does not exist

>>> host: /etc/cni:
* Profile "kubenet-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084205"

>>> host: ip a s:
* Profile "kubenet-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084205"

>>> host: ip r s:
* Profile "kubenet-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084205"

>>> host: iptables-save:
* Profile "kubenet-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084205"

>>> host: iptables table nat:
* Profile "kubenet-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084205"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-084205" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-084205" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-084205" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084205"

>>> host: kubelet daemon config:
* Profile "kubenet-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084205"

>>> k8s: kubelet logs:
* Profile "kubenet-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084205"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084205"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084205"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 06 Oct 2023 02:52:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-647181
contexts:
- context:
    cluster: pause-647181
    extensions:
    - extension:
        last-update: Fri, 06 Oct 2023 02:52:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: pause-647181
  name: pause-647181
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-647181
  user:
    client-certificate: /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/pause-647181/client.crt
    client-key: /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/pause-647181/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-084205

>>> host: docker daemon status:
* Profile "kubenet-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084205"

>>> host: docker daemon config:
* Profile "kubenet-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084205"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084205"

>>> host: docker system info:
* Profile "kubenet-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084205"

>>> host: cri-docker daemon status:
* Profile "kubenet-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084205"

>>> host: cri-docker daemon config:
* Profile "kubenet-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084205"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084205"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084205"

>>> host: cri-dockerd version:
* Profile "kubenet-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084205"

>>> host: containerd daemon status:
* Profile "kubenet-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084205"

>>> host: containerd daemon config:
* Profile "kubenet-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084205"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084205"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084205"

>>> host: containerd config dump:
* Profile "kubenet-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084205"

>>> host: crio daemon status:
* Profile "kubenet-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084205"

>>> host: crio daemon config:
* Profile "kubenet-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084205"

>>> host: /etc/crio:
* Profile "kubenet-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084205"

>>> host: crio config:
* Profile "kubenet-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084205"
----------------------- debugLogs end: kubenet-084205 [took: 5.501373191s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-084205" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-084205
--- SKIP: TestNetworkPlugins/group/kubenet (5.70s)
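Note: this skip is expected rather than a failure: kubenet is not a CNI plugin, and cri-o only runs with a CNI, so the suite never starts the kubenet profile (which is why every debug query above reports a missing context). To get a comparable bridge-style network on this runtime, a profile can be started with an explicit CNI instead; a sketch, with an illustrative profile name:

    # kubenet is unavailable with cri-o, so select a real CNI such as bridge
    out/minikube-linux-arm64 start -p bridge-demo --driver=docker --container-runtime=crio --cni=bridge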

                                                
                                    
TestNetworkPlugins/group/cilium (4.58s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-084205 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-084205

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-084205

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-084205

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-084205

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-084205

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-084205

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-084205

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-084205

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-084205

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-084205

>>> host: /etc/nsswitch.conf:
* Profile "cilium-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084205"

>>> host: /etc/hosts:
* Profile "cilium-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084205"

>>> host: /etc/resolv.conf:
* Profile "cilium-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084205"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-084205

>>> host: crictl pods:
* Profile "cilium-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084205"

>>> host: crictl containers:
* Profile "cilium-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084205"

>>> k8s: describe netcat deployment:
error: context "cilium-084205" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-084205" does not exist

>>> k8s: netcat logs:
error: context "cilium-084205" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-084205" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-084205" does not exist

>>> k8s: coredns logs:
error: context "cilium-084205" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-084205" does not exist

>>> k8s: api server logs:
error: context "cilium-084205" does not exist

>>> host: /etc/cni:
* Profile "cilium-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084205"

>>> host: ip a s:
* Profile "cilium-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084205"

>>> host: ip r s:
* Profile "cilium-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084205"

>>> host: iptables-save:
* Profile "cilium-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084205"

>>> host: iptables table nat:
* Profile "cilium-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084205"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-084205

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-084205

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-084205" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-084205" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-084205

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-084205

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-084205" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-084205" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-084205" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-084205" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-084205" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084205"

>>> host: kubelet daemon config:
* Profile "cilium-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084205"

>>> k8s: kubelet logs:
* Profile "cilium-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084205"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084205"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084205"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17314-2262959/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 06 Oct 2023 02:52:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-647181
contexts:
- context:
    cluster: pause-647181
    extensions:
    - extension:
        last-update: Fri, 06 Oct 2023 02:52:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: pause-647181
  name: pause-647181
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-647181
  user:
    client-certificate: /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/pause-647181/client.crt
    client-key: /home/jenkins/minikube-integration/17314-2262959/.minikube/profiles/pause-647181/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-084205

>>> host: docker daemon status:
* Profile "cilium-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084205"

>>> host: docker daemon config:
* Profile "cilium-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084205"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084205"

>>> host: docker system info:
* Profile "cilium-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084205"

>>> host: cri-docker daemon status:
* Profile "cilium-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084205"

>>> host: cri-docker daemon config:
* Profile "cilium-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084205"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084205"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084205"

>>> host: cri-dockerd version:
* Profile "cilium-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084205"

>>> host: containerd daemon status:
* Profile "cilium-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084205"

>>> host: containerd daemon config:
* Profile "cilium-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084205"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084205"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084205"

>>> host: containerd config dump:
* Profile "cilium-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084205"

>>> host: crio daemon status:
* Profile "cilium-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084205"

>>> host: crio daemon config:
* Profile "cilium-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084205"

>>> host: /etc/crio:
* Profile "cilium-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084205"

>>> host: crio config:
* Profile "cilium-084205" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084205"
----------------------- debugLogs end: cilium-084205 [took: 4.330901536s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-084205" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-084205
--- SKIP: TestNetworkPlugins/group/cilium (4.58s)

                                                
                                    