Test Report: Docker_Linux_crio_arm64 17731

2299ceaec17b686deec86f12c40bdefcf1fe6842:2023-12-05:32161

Failed tests (7/315)

|-------|------------------------------------------------------|--------------|
| Order | Failed Test                                          | Duration (s) |
|-------|------------------------------------------------------|--------------|
| 35    | TestAddons/parallel/Ingress                          | 167.67       |
| 36    | TestAddons/parallel/InspektorGadget                  | 483.31       |
| 166   | TestIngressAddonLegacy/serial/ValidateIngressAddons  | 175.01       |
| 216   | TestMultiNode/serial/PingHostFrom2Pods               | 4.45         |
| 238   | TestRunningBinaryUpgrade                             | 73.34        |
| 241   | TestMissingContainerUpgrade                          | 185.25       |
| 253   | TestStoppedBinaryUpgrade/Upgrade                     | 79.37        |
|-------|------------------------------------------------------|--------------|
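To re-run one of these failures locally, a minimal sketch (the -run pattern follows standard Go test conventions; the package path and timeout are assumptions based on minikube's integration-test layout, not taken from this report):

	# hypothetical local re-run of the first failed test
	go test -v -timeout 30m ./test/integration -run 'TestAddons/parallel/Ingress'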
TestAddons/parallel/Ingress (167.67s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-753790 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-753790 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-753790 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [203fd2c1-026b-4688-bd23-05fe904bbbfd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [203fd2c1-026b-4688-bd23-05fe904bbbfd] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.010136171s
addons_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p addons-753790 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-753790 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.166186585s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
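A minimal sketch for debugging this step by hand, assuming the addons-753790 profile is still running (the first command is taken verbatim from the test step above; status 28 is curl's timeout exit code, propagated back through ssh):

	# repeat the in-node curl that timed out
	out/minikube-linux-arm64 -p addons-753790 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# then check whether the ingress controller pod is actually serving
	kubectl --context addons-753790 -n ingress-nginx get pods -o wide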
addons_test.go:285: (dbg) Run:  kubectl --context addons-753790 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p addons-753790 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.055879903s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
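For the DNS half of the failure, a sketch of how to probe the ingress-dns responder directly, assuming the profile is still up and the addon has not yet been disabled (the test disables it just below); 192.168.49.2 is the node IP reported by the ip command above, and the -timeout flag and explicit-Host curl are debugging suggestions, not steps from this test:

	# query the ingress-dns server on the node IP with a short timeout
	nslookup -timeout=5 hello-john.test 192.168.49.2
	# or skip DNS and hit the ingress directly with a Host header
	curl -s --connect-timeout 5 http://192.168.49.2/ -H 'Host: hello-john.test'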
addons_test.go:305: (dbg) Run:  out/minikube-linux-arm64 -p addons-753790 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-arm64 -p addons-753790 addons disable ingress-dns --alsologtostderr -v=1: (1.140443586s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-arm64 -p addons-753790 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-arm64 -p addons-753790 addons disable ingress --alsologtostderr -v=1: (7.727976339s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-753790
helpers_test.go:235: (dbg) docker inspect addons-753790:

-- stdout --
	[
	    {
	        "Id": "9a7b5170de31ef918d86be75d8cc26debc9bf3dcf4d5952d94980f981fbf56db",
	        "Created": "2023-12-05T19:36:15.156368213Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8824,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-05T19:36:15.531511994Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4e0f3cc6f04c458835e9edb05d52f031520d40521bc3568d81cbb7c06a79ef2",
	        "ResolvConfPath": "/var/lib/docker/containers/9a7b5170de31ef918d86be75d8cc26debc9bf3dcf4d5952d94980f981fbf56db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9a7b5170de31ef918d86be75d8cc26debc9bf3dcf4d5952d94980f981fbf56db/hostname",
	        "HostsPath": "/var/lib/docker/containers/9a7b5170de31ef918d86be75d8cc26debc9bf3dcf4d5952d94980f981fbf56db/hosts",
	        "LogPath": "/var/lib/docker/containers/9a7b5170de31ef918d86be75d8cc26debc9bf3dcf4d5952d94980f981fbf56db/9a7b5170de31ef918d86be75d8cc26debc9bf3dcf4d5952d94980f981fbf56db-json.log",
	        "Name": "/addons-753790",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-753790:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-753790",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3e1e23157cc6754d45f70c5c6f2cb6d4745d8cc057f46063f6d561e99db7ffd9-init/diff:/var/lib/docker/overlay2/ad36f68c22d2503e0656ab5d87c276f08a38342a08463cd6653b41bc4f40eea5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3e1e23157cc6754d45f70c5c6f2cb6d4745d8cc057f46063f6d561e99db7ffd9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3e1e23157cc6754d45f70c5c6f2cb6d4745d8cc057f46063f6d561e99db7ffd9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3e1e23157cc6754d45f70c5c6f2cb6d4745d8cc057f46063f6d561e99db7ffd9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-753790",
	                "Source": "/var/lib/docker/volumes/addons-753790/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-753790",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-753790",
	                "name.minikube.sigs.k8s.io": "addons-753790",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "60cc3be46668aefc83a73c6402ade022263c2a9a54aee32a7268835b03965df3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/60cc3be46668",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-753790": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9a7b5170de31",
	                        "addons-753790"
	                    ],
	                    "NetworkID": "f3b232aa44038f4b7212bf899e0f8a0b2f47e0c09f356712e8e7c87ac892de44",
	                    "EndpointID": "cd31adb0fe2e816237bd965e85b85eb556b584348a5a83358190c6a1265e8736",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
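Most of the inspect dump above matters only for the port map and the network block; a sketch for extracting just those fields with docker's -f templates (the 22/tcp template is the same one cli_runner uses later in this log; the Networks variant is an assumption built the same way):

	# host port mapped to the node's SSH port 22 (32772 in this run)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-753790
	# static IP of the node on the addons-753790 network (192.168.49.2)
	docker inspect -f '{{(index .NetworkSettings.Networks "addons-753790").IPAddress}}' addons-753790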
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-753790 -n addons-753790
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-753790 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-753790 logs -n 25: (1.538695281s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-855824   | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC |                     |
	|         | -p download-only-855824                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                                                           |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC | 05 Dec 23 19:35 UTC |
	| delete  | -p download-only-855824                                                                     | download-only-855824   | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC | 05 Dec 23 19:35 UTC |
	| delete  | -p download-only-855824                                                                     | download-only-855824   | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC | 05 Dec 23 19:35 UTC |
	| start   | --download-only -p                                                                          | download-docker-224607 | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC |                     |
	|         | download-docker-224607                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-224607                                                                   | download-docker-224607 | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC | 05 Dec 23 19:35 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-741946   | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC |                     |
	|         | binary-mirror-741946                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:32795                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-741946                                                                     | binary-mirror-741946   | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC | 05 Dec 23 19:35 UTC |
	| addons  | disable dashboard -p                                                                        | addons-753790          | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC |                     |
	|         | addons-753790                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-753790          | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC |                     |
	|         | addons-753790                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-753790 --wait=true                                                                | addons-753790          | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC | 05 Dec 23 19:38 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ip      | addons-753790 ip                                                                            | addons-753790          | jenkins | v1.32.0 | 05 Dec 23 19:38 UTC | 05 Dec 23 19:38 UTC |
	| addons  | addons-753790 addons disable                                                                | addons-753790          | jenkins | v1.32.0 | 05 Dec 23 19:38 UTC | 05 Dec 23 19:38 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-753790          | jenkins | v1.32.0 | 05 Dec 23 19:38 UTC | 05 Dec 23 19:38 UTC |
	|         | -p addons-753790                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-753790 ssh cat                                                                       | addons-753790          | jenkins | v1.32.0 | 05 Dec 23 19:38 UTC | 05 Dec 23 19:38 UTC |
	|         | /opt/local-path-provisioner/pvc-3d274b4a-eada-4209-8083-82421c6fefec_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-753790 addons disable                                                                | addons-753790          | jenkins | v1.32.0 | 05 Dec 23 19:38 UTC | 05 Dec 23 19:39 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-753790 addons                                                                        | addons-753790          | jenkins | v1.32.0 | 05 Dec 23 19:39 UTC | 05 Dec 23 19:39 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-753790 addons                                                                        | addons-753790          | jenkins | v1.32.0 | 05 Dec 23 19:39 UTC | 05 Dec 23 19:39 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-753790          | jenkins | v1.32.0 | 05 Dec 23 19:39 UTC | 05 Dec 23 19:39 UTC |
	|         | -p addons-753790                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-753790          | jenkins | v1.32.0 | 05 Dec 23 19:39 UTC | 05 Dec 23 19:39 UTC |
	|         | addons-753790                                                                               |                        |         |         |                     |                     |
	| addons  | addons-753790 addons                                                                        | addons-753790          | jenkins | v1.32.0 | 05 Dec 23 19:39 UTC | 05 Dec 23 19:39 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-753790 ssh curl -s                                                                   | addons-753790          | jenkins | v1.32.0 | 05 Dec 23 19:39 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-753790 ip                                                                            | addons-753790          | jenkins | v1.32.0 | 05 Dec 23 19:42 UTC | 05 Dec 23 19:42 UTC |
	| addons  | addons-753790 addons disable                                                                | addons-753790          | jenkins | v1.32.0 | 05 Dec 23 19:42 UTC | 05 Dec 23 19:42 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-753790 addons disable                                                                | addons-753790          | jenkins | v1.32.0 | 05 Dec 23 19:42 UTC | 05 Dec 23 19:42 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/05 19:35:51
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 19:35:51.909842    8344 out.go:296] Setting OutFile to fd 1 ...
	I1205 19:35:51.910009    8344 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:35:51.910046    8344 out.go:309] Setting ErrFile to fd 2...
	I1205 19:35:51.910067    8344 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:35:51.910315    8344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-2478/.minikube/bin
	I1205 19:35:51.910776    8344 out.go:303] Setting JSON to false
	I1205 19:35:51.911510    8344 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1098,"bootTime":1701803854,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 19:35:51.911604    8344 start.go:138] virtualization:  
	I1205 19:35:51.915582    8344 out.go:177] * [addons-753790] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1205 19:35:51.917528    8344 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 19:35:51.919482    8344 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:35:51.917639    8344 notify.go:220] Checking for updates...
	I1205 19:35:51.923449    8344 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-2478/kubeconfig
	I1205 19:35:51.925354    8344 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-2478/.minikube
	I1205 19:35:51.927410    8344 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 19:35:51.929186    8344 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 19:35:51.931705    8344 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 19:35:51.954604    8344 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1205 19:35:51.954720    8344 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:35:52.046437    8344 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-12-05 19:35:52.036331199 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1205 19:35:52.046533    8344 docker.go:295] overlay module found
	I1205 19:35:52.050156    8344 out.go:177] * Using the docker driver based on user configuration
	I1205 19:35:52.052041    8344 start.go:298] selected driver: docker
	I1205 19:35:52.052059    8344 start.go:902] validating driver "docker" against <nil>
	I1205 19:35:52.052072    8344 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 19:35:52.052661    8344 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:35:52.125509    8344 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-12-05 19:35:52.116544252 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1205 19:35:52.125666    8344 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1205 19:35:52.125895    8344 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 19:35:52.127850    8344 out.go:177] * Using Docker driver with root privileges
	I1205 19:35:52.130019    8344 cni.go:84] Creating CNI manager for ""
	I1205 19:35:52.130037    8344 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 19:35:52.130049    8344 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 19:35:52.130063    8344 start_flags.go:323] config:
	{Name:addons-753790 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-753790 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:35:52.132303    8344 out.go:177] * Starting control plane node addons-753790 in cluster addons-753790
	I1205 19:35:52.134016    8344 cache.go:121] Beginning downloading kic base image for docker with crio
	I1205 19:35:52.135942    8344 out.go:177] * Pulling base image ...
	I1205 19:35:52.137764    8344 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local docker daemon
	I1205 19:35:52.137921    8344 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 19:35:52.137949    8344 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17731-2478/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I1205 19:35:52.137960    8344 cache.go:56] Caching tarball of preloaded images
	I1205 19:35:52.138020    8344 preload.go:174] Found /home/jenkins/minikube-integration/17731-2478/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1205 19:35:52.138036    8344 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1205 19:35:52.138365    8344 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/config.json ...
	I1205 19:35:52.138394    8344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/config.json: {Name:mkcffab7f9f6129a33892e5ab8934455fae325aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:52.154734    8344 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f to local cache
	I1205 19:35:52.154856    8344 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local cache directory
	I1205 19:35:52.154877    8344 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local cache directory, skipping pull
	I1205 19:35:52.154883    8344 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f exists in cache, skipping pull
	I1205 19:35:52.154893    8344 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f as a tarball
	I1205 19:35:52.154901    8344 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f from local cache
	I1205 19:36:07.569679    8344 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f from cached tarball
	I1205 19:36:07.569718    8344 cache.go:194] Successfully downloaded all kic artifacts
	I1205 19:36:07.569780    8344 start.go:365] acquiring machines lock for addons-753790: {Name:mk0a3aaca0e4c76f2f889d779e8013d626af074e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:36:07.569887    8344 start.go:369] acquired machines lock for "addons-753790" in 83.668µs
	I1205 19:36:07.569917    8344 start.go:93] Provisioning new machine with config: &{Name:addons-753790 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-753790 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:36:07.569999    8344 start.go:125] createHost starting for "" (driver="docker")
	I1205 19:36:07.572546    8344 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1205 19:36:07.572775    8344 start.go:159] libmachine.API.Create for "addons-753790" (driver="docker")
	I1205 19:36:07.572822    8344 client.go:168] LocalClient.Create starting
	I1205 19:36:07.572915    8344 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem
	I1205 19:36:08.280514    8344 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/cert.pem
	I1205 19:36:08.423701    8344 cli_runner.go:164] Run: docker network inspect addons-753790 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 19:36:08.441711    8344 cli_runner.go:211] docker network inspect addons-753790 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 19:36:08.441785    8344 network_create.go:281] running [docker network inspect addons-753790] to gather additional debugging logs...
	I1205 19:36:08.441804    8344 cli_runner.go:164] Run: docker network inspect addons-753790
	W1205 19:36:08.458312    8344 cli_runner.go:211] docker network inspect addons-753790 returned with exit code 1
	I1205 19:36:08.458347    8344 network_create.go:284] error running [docker network inspect addons-753790]: docker network inspect addons-753790: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-753790 not found
	I1205 19:36:08.458360    8344 network_create.go:286] output of [docker network inspect addons-753790]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-753790 not found
	
	** /stderr **
	I1205 19:36:08.458472    8344 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 19:36:08.475034    8344 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40025835a0}
	I1205 19:36:08.475069    8344 network_create.go:124] attempt to create docker network addons-753790 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1205 19:36:08.475125    8344 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-753790 addons-753790
	I1205 19:36:08.537915    8344 network_create.go:108] docker network addons-753790 192.168.49.0/24 created
	I1205 19:36:08.537947    8344 kic.go:121] calculated static IP "192.168.49.2" for the "addons-753790" container
	I1205 19:36:08.538027    8344 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 19:36:08.555159    8344 cli_runner.go:164] Run: docker volume create addons-753790 --label name.minikube.sigs.k8s.io=addons-753790 --label created_by.minikube.sigs.k8s.io=true
	I1205 19:36:08.574930    8344 oci.go:103] Successfully created a docker volume addons-753790
	I1205 19:36:08.575012    8344 cli_runner.go:164] Run: docker run --rm --name addons-753790-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-753790 --entrypoint /usr/bin/test -v addons-753790:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -d /var/lib
	I1205 19:36:10.808501    8344 cli_runner.go:217] Completed: docker run --rm --name addons-753790-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-753790 --entrypoint /usr/bin/test -v addons-753790:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -d /var/lib: (2.23344409s)
	I1205 19:36:10.808530    8344 oci.go:107] Successfully prepared a docker volume addons-753790
	I1205 19:36:10.808561    8344 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 19:36:10.808583    8344 kic.go:194] Starting extracting preloaded images to volume ...
	I1205 19:36:10.808664    8344 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17731-2478/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-753790:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -I lz4 -xf /preloaded.tar -C /extractDir
	I1205 19:36:15.043741    8344 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17731-2478/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-753790:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -I lz4 -xf /preloaded.tar -C /extractDir: (4.235039179s)
	I1205 19:36:15.043785    8344 kic.go:203] duration metric: took 4.235201 seconds to extract preloaded images to volume
	W1205 19:36:15.043941    8344 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1205 19:36:15.044126    8344 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 19:36:15.140282    8344 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-753790 --name addons-753790 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-753790 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-753790 --network addons-753790 --ip 192.168.49.2 --volume addons-753790:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f
	I1205 19:36:15.539910    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Running}}
	I1205 19:36:15.562980    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Status}}
	I1205 19:36:15.591613    8344 cli_runner.go:164] Run: docker exec addons-753790 stat /var/lib/dpkg/alternatives/iptables
	I1205 19:36:15.662765    8344 oci.go:144] the created container "addons-753790" has a running status.
	I1205 19:36:15.662793    8344 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa...
	I1205 19:36:16.048771    8344 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 19:36:16.085504    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Status}}
	I1205 19:36:16.115898    8344 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 19:36:16.115918    8344 kic_runner.go:114] Args: [docker exec --privileged addons-753790 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 19:36:16.199036    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Status}}
	I1205 19:36:16.234930    8344 machine.go:88] provisioning docker machine ...
	I1205 19:36:16.234969    8344 ubuntu.go:169] provisioning hostname "addons-753790"
	I1205 19:36:16.235028    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:16.268003    8344 main.go:141] libmachine: Using SSH client type: native
	I1205 19:36:16.268417    8344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1205 19:36:16.268436    8344 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-753790 && echo "addons-753790" | sudo tee /etc/hostname
	I1205 19:36:16.271388    8344 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51844->127.0.0.1:32772: read: connection reset by peer
	I1205 19:36:19.437721    8344 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-753790
	
	I1205 19:36:19.437803    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:19.456804    8344 main.go:141] libmachine: Using SSH client type: native
	I1205 19:36:19.457216    8344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1205 19:36:19.457239    8344 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-753790' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-753790/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-753790' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 19:36:19.604611    8344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:36:19.604635    8344 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17731-2478/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-2478/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-2478/.minikube}
	I1205 19:36:19.604664    8344 ubuntu.go:177] setting up certificates
	I1205 19:36:19.604673    8344 provision.go:83] configureAuth start
	I1205 19:36:19.604737    8344 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-753790
	I1205 19:36:19.623116    8344 provision.go:138] copyHostCerts
	I1205 19:36:19.623198    8344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-2478/.minikube/cert.pem (1123 bytes)
	I1205 19:36:19.623311    8344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-2478/.minikube/key.pem (1679 bytes)
	I1205 19:36:19.623388    8344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-2478/.minikube/ca.pem (1078 bytes)
	I1205 19:36:19.623441    8344 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-2478/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca-key.pem org=jenkins.addons-753790 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-753790]
	I1205 19:36:20.535186    8344 provision.go:172] copyRemoteCerts
	I1205 19:36:20.535274    8344 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 19:36:20.535318    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:20.553184    8344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa Username:docker}
	I1205 19:36:20.657812    8344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 19:36:20.684920    8344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1205 19:36:20.713418    8344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 19:36:20.740255    8344 provision.go:86] duration metric: configureAuth took 1.1355691s
	I1205 19:36:20.740279    8344 ubuntu.go:193] setting minikube options for container-runtime
	I1205 19:36:20.740470    8344 config.go:182] Loaded profile config "addons-753790": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 19:36:20.740577    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:20.758917    8344 main.go:141] libmachine: Using SSH client type: native
	I1205 19:36:20.759320    8344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1205 19:36:20.759341    8344 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 19:36:21.029951    8344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 19:36:21.029976    8344 machine.go:91] provisioned docker machine in 4.795023381s
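
Note: the %!s(MISSING) in the SSH command at 19:36:20.759341 above (and in similar Run: lines below, e.g. the crictl.yaml write) is almost certainly a logging artifact: minikube appears to pass the command template through a printf-style logger before substitution, so the %s verb renders as missing. The command that actually ran produced the drop-in echoed back at 19:36:21.029951:

	# /etc/sysconfig/crio.minikube (reconstructed from the echoed output)
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

cri-o is restarted immediately afterwards; presumably the kicbase crio.service sources this file via an EnvironmentFile= directive, though that is an assumption, not something the log shows.
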
	I1205 19:36:21.029986    8344 client.go:171] LocalClient.Create took 13.457153178s
	I1205 19:36:21.029999    8344 start.go:167] duration metric: libmachine.API.Create for "addons-753790" took 13.457223356s
	I1205 19:36:21.030007    8344 start.go:300] post-start starting for "addons-753790" (driver="docker")
	I1205 19:36:21.030016    8344 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 19:36:21.030080    8344 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 19:36:21.030124    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:21.053331    8344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa Username:docker}
	I1205 19:36:21.158289    8344 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 19:36:21.162257    8344 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 19:36:21.162292    8344 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1205 19:36:21.162303    8344 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1205 19:36:21.162316    8344 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1205 19:36:21.162326    8344 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-2478/.minikube/addons for local assets ...
	I1205 19:36:21.162395    8344 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-2478/.minikube/files for local assets ...
	I1205 19:36:21.162422    8344 start.go:303] post-start completed in 132.409818ms
	I1205 19:36:21.162718    8344 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-753790
	I1205 19:36:21.179829    8344 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/config.json ...
	I1205 19:36:21.180095    8344 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 19:36:21.180158    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:21.200386    8344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa Username:docker}
	I1205 19:36:21.301542    8344 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 19:36:21.306886    8344 start.go:128] duration metric: createHost completed in 13.736873202s
	I1205 19:36:21.306907    8344 start.go:83] releasing machines lock for "addons-753790", held for 13.737006948s
	I1205 19:36:21.306981    8344 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-753790
	I1205 19:36:21.324082    8344 ssh_runner.go:195] Run: cat /version.json
	I1205 19:36:21.324134    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:21.324201    8344 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 19:36:21.324266    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:21.342929    8344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa Username:docker}
	I1205 19:36:21.360561    8344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa Username:docker}
	I1205 19:36:21.575511    8344 ssh_runner.go:195] Run: systemctl --version
	I1205 19:36:21.580777    8344 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 19:36:21.725908    8344 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 19:36:21.731276    8344 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:36:21.753568    8344 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1205 19:36:21.753646    8344 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:36:21.785870    8344 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
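
Note: rather than deleting conflicting CNI configs, minikube renames them with a .mk_disabled suffix, so the CNI it installs later (kindnet, chosen below) is the only active configuration while the originals remain recoverable. On the node this should leave, per the files named above:

	ls /etc/cni/net.d
	# 87-podman-bridge.conflist.mk_disabled  100-crio-bridge.conf.mk_disabled  ...
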
	I1205 19:36:21.785897    8344 start.go:475] detecting cgroup driver to use...
	I1205 19:36:21.785928    8344 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1205 19:36:21.785978    8344 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 19:36:21.803586    8344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 19:36:21.816451    8344 docker.go:203] disabling cri-docker service (if available) ...
	I1205 19:36:21.816551    8344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 19:36:21.832124    8344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 19:36:21.848559    8344 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 19:36:21.945385    8344 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 19:36:22.041922    8344 docker.go:219] disabling docker service ...
	I1205 19:36:22.042028    8344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 19:36:22.062253    8344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 19:36:22.075461    8344 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 19:36:22.165581    8344 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 19:36:22.266853    8344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 19:36:22.279179    8344 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 19:36:22.297631    8344 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1205 19:36:22.297696    8344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:36:22.308732    8344 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 19:36:22.308793    8344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:36:22.319615    8344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:36:22.330573    8344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:36:22.341251    8344 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 19:36:22.351413    8344 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 19:36:22.360976    8344 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 19:36:22.370263    8344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:36:22.454601    8344 ssh_runner.go:195] Run: sudo systemctl restart crio
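
Note: after the sed/rm edits above, the effective cri-o drop-in should read roughly as follows (reconstructed, not captured verbatim; section headers assumed from the stock crio.conf layout):

	# /etc/crio/crio.conf.d/02-crio.conf
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"

"cgroupfs" matches the cgroup driver detected on the host at 19:36:21.785928, and conmon_cgroup is pinned to "pod", which is what cri-o expects when the cgroupfs manager is in use.
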
	I1205 19:36:22.566535    8344 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 19:36:22.566610    8344 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 19:36:22.571088    8344 start.go:543] Will wait 60s for crictl version
	I1205 19:36:22.571144    8344 ssh_runner.go:195] Run: which crictl
	I1205 19:36:22.575134    8344 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 19:36:22.613738    8344 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1205 19:36:22.613865    8344 ssh_runner.go:195] Run: crio --version
	I1205 19:36:22.659470    8344 ssh_runner.go:195] Run: crio --version
	I1205 19:36:22.709825    8344 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1205 19:36:22.712147    8344 cli_runner.go:164] Run: docker network inspect addons-753790 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 19:36:22.729047    8344 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1205 19:36:22.733374    8344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
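
Note: the brace-group-into-temp-file-then-sudo-cp pattern above is how minikube edits /etc/hosts without relying on sudo redirection; the net effect is a single added line resolving the network gateway:

	192.168.49.1	host.minikube.internal
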
	I1205 19:36:22.746060    8344 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 19:36:22.746127    8344 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:36:22.811655    8344 crio.go:496] all images are preloaded for cri-o runtime.
	I1205 19:36:22.811676    8344 crio.go:415] Images already preloaded, skipping extraction
	I1205 19:36:22.811730    8344 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:36:22.851094    8344 crio.go:496] all images are preloaded for cri-o runtime.
	I1205 19:36:22.851117    8344 cache_images.go:84] Images are preloaded, skipping loading
	I1205 19:36:22.851188    8344 ssh_runner.go:195] Run: crio config
	I1205 19:36:22.919605    8344 cni.go:84] Creating CNI manager for ""
	I1205 19:36:22.919625    8344 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 19:36:22.919670    8344 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 19:36:22.919696    8344 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-753790 NodeName:addons-753790 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 19:36:22.919881    8344 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-753790"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 19:36:22.919960    8344 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-753790 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-753790 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1205 19:36:22.920043    8344 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1205 19:36:22.930121    8344 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 19:36:22.930225    8344 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 19:36:22.940016    8344 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1205 19:36:22.959618    8344 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 19:36:22.979784    8344 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
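
Note: the rendered kubeadm config shown at 19:36:22.919881 is what lands in /var/tmp/minikube/kubeadm.yaml.new (2094 bytes) here, and is copied into place as kubeadm.yaml before init below. If it ever needs checking by hand, kubeadm can evaluate it without changing the node, e.g.:

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
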
	I1205 19:36:22.999258    8344 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 19:36:23.003544    8344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:36:23.016180    8344 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790 for IP: 192.168.49.2
	I1205 19:36:23.016209    8344 certs.go:190] acquiring lock for shared ca certs: {Name:mk8ef93a51958e82275f202c3866b092b6aa4ced Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:36:23.016349    8344 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17731-2478/.minikube/ca.key
	I1205 19:36:23.389384    8344 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-2478/.minikube/ca.crt ...
	I1205 19:36:23.389410    8344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/.minikube/ca.crt: {Name:mk6803fcf95b12ed9d9ed71b2ebfb52226bf7c74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:36:23.389609    8344 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-2478/.minikube/ca.key ...
	I1205 19:36:23.389623    8344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/.minikube/ca.key: {Name:mkf92cda3b17c7b2bc3ea5041c219bff8618a437 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:36:23.389708    8344 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17731-2478/.minikube/proxy-client-ca.key
	I1205 19:36:24.172249    8344 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-2478/.minikube/proxy-client-ca.crt ...
	I1205 19:36:24.172277    8344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/.minikube/proxy-client-ca.crt: {Name:mkf1ad06a6ca45c538781f7e4d8156ae9ea85689 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:36:24.172453    8344 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-2478/.minikube/proxy-client-ca.key ...
	I1205 19:36:24.172466    8344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/.minikube/proxy-client-ca.key: {Name:mk976d49e1c41f0b574101fa3b655a03410a7360 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:36:24.172578    8344 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.key
	I1205 19:36:24.172594    8344 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.crt with IP's: []
	I1205 19:36:24.292230    8344 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.crt ...
	I1205 19:36:24.292256    8344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.crt: {Name:mkd9b024028d488e95b01d4658c8d526a9df083f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:36:24.292434    8344 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.key ...
	I1205 19:36:24.292449    8344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.key: {Name:mk131c0bce6aa9cc9a0c7550e2f58984bfefb7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:36:24.292530    8344 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/apiserver.key.dd3b5fb2
	I1205 19:36:24.292551    8344 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1205 19:36:24.453793    8344 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/apiserver.crt.dd3b5fb2 ...
	I1205 19:36:24.453819    8344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/apiserver.crt.dd3b5fb2: {Name:mkd96a5ba477f7ac61b1220d340ee67fbb940da6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:36:24.453987    8344 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/apiserver.key.dd3b5fb2 ...
	I1205 19:36:24.454001    8344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/apiserver.key.dd3b5fb2: {Name:mk711b2e84591f91a1f001e8b533ea6bab25c4c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:36:24.454080    8344 certs.go:337] copying /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/apiserver.crt
	I1205 19:36:24.454152    8344 certs.go:341] copying /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/apiserver.key
	I1205 19:36:24.454203    8344 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/proxy-client.key
	I1205 19:36:24.454221    8344 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/proxy-client.crt with IP's: []
	I1205 19:36:24.902717    8344 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/proxy-client.crt ...
	I1205 19:36:24.902747    8344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/proxy-client.crt: {Name:mkcba8d9fa774f098c79875bed9c742ae22282fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:36:24.902919    8344 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/proxy-client.key ...
	I1205 19:36:24.902931    8344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/proxy-client.key: {Name:mk0d46cbd2a8515c1022cefd060b5673f2a88244 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:36:24.903113    8344 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 19:36:24.903153    8344 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem (1078 bytes)
	I1205 19:36:24.903182    8344 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/home/jenkins/minikube-integration/17731-2478/.minikube/certs/cert.pem (1123 bytes)
	I1205 19:36:24.903212    8344 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/home/jenkins/minikube-integration/17731-2478/.minikube/certs/key.pem (1679 bytes)
	I1205 19:36:24.903850    8344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1205 19:36:24.930857    8344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 19:36:24.958111    8344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 19:36:24.984790    8344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 19:36:25.012074    8344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 19:36:25.040546    8344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 19:36:25.068236    8344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 19:36:25.096819    8344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 19:36:25.123570    8344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 19:36:25.150914    8344 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 19:36:25.170992    8344 ssh_runner.go:195] Run: openssl version
	I1205 19:36:25.177662    8344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 19:36:25.188661    8344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:36:25.193078    8344 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:36:25.193170    8344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:36:25.201115    8344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
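
Note: the b5213941.0 symlink name is the OpenSSL subject-hash of the minikube CA: x509 -hash digests the certificate's subject (CN=minikubeCA), not its key material, so the value is stable across clusters even though each run generates a fresh CA. The hash command two lines above should print exactly that value:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# b5213941
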
	I1205 19:36:25.211965    8344 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 19:36:25.216072    8344 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1205 19:36:25.216156    8344 kubeadm.go:404] StartCluster: {Name:addons-753790 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-753790 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:36:25.216246    8344 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 19:36:25.216337    8344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 19:36:25.256960    8344 cri.go:89] found id: ""
	I1205 19:36:25.257062    8344 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 19:36:25.267392    8344 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 19:36:25.277574    8344 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1205 19:36:25.277662    8344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 19:36:25.287637    8344 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 19:36:25.287713    8344 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 19:36:25.337992    8344 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1205 19:36:25.338274    8344 kubeadm.go:322] [preflight] Running pre-flight checks
	I1205 19:36:25.390401    8344 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1205 19:36:25.390472    8344 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1050-aws
	I1205 19:36:25.390511    8344 kubeadm.go:322] OS: Linux
	I1205 19:36:25.390569    8344 kubeadm.go:322] CGROUPS_CPU: enabled
	I1205 19:36:25.390628    8344 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1205 19:36:25.390684    8344 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1205 19:36:25.390735    8344 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1205 19:36:25.390786    8344 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1205 19:36:25.390845    8344 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1205 19:36:25.390892    8344 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1205 19:36:25.390945    8344 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1205 19:36:25.390992    8344 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1205 19:36:25.469216    8344 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 19:36:25.469323    8344 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 19:36:25.469415    8344 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 19:36:25.719941    8344 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 19:36:25.723291    8344 out.go:204]   - Generating certificates and keys ...
	I1205 19:36:25.723416    8344 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1205 19:36:25.723497    8344 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1205 19:36:26.279689    8344 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 19:36:26.396288    8344 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1205 19:36:27.615626    8344 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1205 19:36:27.934993    8344 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1205 19:36:28.112549    8344 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1205 19:36:28.112927    8344 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-753790 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1205 19:36:28.264453    8344 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1205 19:36:28.264832    8344 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-753790 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1205 19:36:28.585575    8344 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 19:36:28.952969    8344 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 19:36:29.084120    8344 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1205 19:36:29.084478    8344 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 19:36:29.577633    8344 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 19:36:31.045477    8344 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 19:36:31.640178    8344 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 19:36:31.953198    8344 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 19:36:31.954279    8344 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 19:36:31.957560    8344 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 19:36:31.960020    8344 out.go:204]   - Booting up control plane ...
	I1205 19:36:31.960144    8344 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 19:36:31.960217    8344 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 19:36:31.961219    8344 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 19:36:31.970896    8344 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 19:36:31.971948    8344 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 19:36:31.972197    8344 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1205 19:36:32.059165    8344 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 19:36:39.061230    8344 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.002108 seconds
	I1205 19:36:39.061348    8344 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 19:36:39.092658    8344 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 19:36:39.618390    8344 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 19:36:39.618572    8344 kubeadm.go:322] [mark-control-plane] Marking the node addons-753790 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 19:36:40.130227    8344 kubeadm.go:322] [bootstrap-token] Using token: idz0tv.fy35j0upqrlrbzb1
	I1205 19:36:40.132194    8344 out.go:204]   - Configuring RBAC rules ...
	I1205 19:36:40.132311    8344 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 19:36:40.138543    8344 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 19:36:40.146147    8344 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 19:36:40.149501    8344 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 19:36:40.152720    8344 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 19:36:40.157007    8344 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 19:36:40.172376    8344 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 19:36:40.409341    8344 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1205 19:36:40.559086    8344 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1205 19:36:40.559103    8344 kubeadm.go:322] 
	I1205 19:36:40.559160    8344 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1205 19:36:40.559165    8344 kubeadm.go:322] 
	I1205 19:36:40.559236    8344 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1205 19:36:40.559242    8344 kubeadm.go:322] 
	I1205 19:36:40.559266    8344 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1205 19:36:40.559321    8344 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 19:36:40.559368    8344 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 19:36:40.559373    8344 kubeadm.go:322] 
	I1205 19:36:40.559423    8344 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1205 19:36:40.559428    8344 kubeadm.go:322] 
	I1205 19:36:40.559472    8344 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 19:36:40.559477    8344 kubeadm.go:322] 
	I1205 19:36:40.559525    8344 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1205 19:36:40.559596    8344 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 19:36:40.559667    8344 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 19:36:40.559673    8344 kubeadm.go:322] 
	I1205 19:36:40.559750    8344 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 19:36:40.559834    8344 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1205 19:36:40.559841    8344 kubeadm.go:322] 
	I1205 19:36:40.559920    8344 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token idz0tv.fy35j0upqrlrbzb1 \
	I1205 19:36:40.560016    8344 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6da2d77b39f3e1ef9cef384839cc68d840e02bf2206be4d2a37e26b3d0a71759 \
	I1205 19:36:40.560035    8344 kubeadm.go:322] 	--control-plane 
	I1205 19:36:40.560039    8344 kubeadm.go:322] 
	I1205 19:36:40.560118    8344 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1205 19:36:40.560123    8344 kubeadm.go:322] 
	I1205 19:36:40.560199    8344 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token idz0tv.fy35j0upqrlrbzb1 \
	I1205 19:36:40.560294    8344 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6da2d77b39f3e1ef9cef384839cc68d840e02bf2206be4d2a37e26b3d0a71759 
	I1205 19:36:40.563482    8344 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1050-aws\n", err: exit status 1
	I1205 19:36:40.563590    8344 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
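
Note: the --discovery-token-ca-cert-hash values above are the SHA-256 of the cluster CA's public key. They can be recomputed on the node with the standard recipe from the kubeadm documentation, using the cert path minikube set up earlier in this run:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
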
	I1205 19:36:40.563604    8344 cni.go:84] Creating CNI manager for ""
	I1205 19:36:40.563611    8344 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 19:36:40.567143    8344 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1205 19:36:40.569084    8344 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1205 19:36:40.585046    8344 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1205 19:36:40.585064    8344 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1205 19:36:40.640240    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
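
Note: the cni.go lines above show why kindnet was chosen (docker driver + crio runtime), and the apply here installs its manifest with the cluster's bundled kubectl. Once applied, the kindnet daemonset (assumed to carry its usual name in kube-system) should report ready:

	kubectl -n kube-system get ds kindnet
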
	I1205 19:36:41.480024    8344 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 19:36:41.480156    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:41.480229    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728 minikube.k8s.io/name=addons-753790 minikube.k8s.io/updated_at=2023_12_05T19_36_41_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:41.659525    8344 ops.go:34] apiserver oom_adj: -16
	I1205 19:36:41.659606    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:41.754575    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:42.345980    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:42.845792    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:43.345697    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:43.845314    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:44.345894    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:44.845442    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:45.345555    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:45.846208    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:46.345742    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:46.845337    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:47.345379    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:47.845978    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:48.345279    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:48.845413    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:49.345811    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:49.845650    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:50.345864    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:50.845611    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:51.345825    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:51.846171    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:52.345984    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:52.845732    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:53.015971    8344 kubeadm.go:1088] duration metric: took 11.535858434s to wait for elevateKubeSystemPrivileges.
	I1205 19:36:53.015995    8344 kubeadm.go:406] StartCluster complete in 27.799841691s
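
Note: the burst of "kubectl get sa default" calls between 19:36:41 and 19:36:52 above is minikube polling until the default ServiceAccount exists, which is what the 11.535858434s elevateKubeSystemPrivileges metric measures. Functionally it amounts to roughly:

	until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
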
	I1205 19:36:53.016011    8344 settings.go:142] acquiring lock: {Name:mk9158e056caaf62837361622cedbf37e18c3f8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:36:53.016119    8344 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17731-2478/kubeconfig
	I1205 19:36:53.016494    8344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/kubeconfig: {Name:mka2e3e3347ae085678ba2bb20225628c9c86ab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:36:53.016766    8344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 19:36:53.016791    8344 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1205 19:36:53.016882    8344 addons.go:69] Setting volumesnapshots=true in profile "addons-753790"
	I1205 19:36:53.016900    8344 addons.go:231] Setting addon volumesnapshots=true in "addons-753790"
	I1205 19:36:53.016956    8344 host.go:66] Checking if "addons-753790" exists ...
	I1205 19:36:53.017025    8344 config.go:182] Loaded profile config "addons-753790": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 19:36:53.017069    8344 addons.go:69] Setting ingress-dns=true in profile "addons-753790"
	I1205 19:36:53.017080    8344 addons.go:231] Setting addon ingress-dns=true in "addons-753790"
	I1205 19:36:53.017126    8344 host.go:66] Checking if "addons-753790" exists ...
	I1205 19:36:53.017411    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Status}}
	I1205 19:36:53.017503    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Status}}
	I1205 19:36:53.017905    8344 addons.go:69] Setting inspektor-gadget=true in profile "addons-753790"
	I1205 19:36:53.017926    8344 addons.go:231] Setting addon inspektor-gadget=true in "addons-753790"
	I1205 19:36:53.017964    8344 host.go:66] Checking if "addons-753790" exists ...
	I1205 19:36:53.018357    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Status}}
	I1205 19:36:53.018462    8344 addons.go:69] Setting cloud-spanner=true in profile "addons-753790"
	I1205 19:36:53.018474    8344 addons.go:231] Setting addon cloud-spanner=true in "addons-753790"
	I1205 19:36:53.018508    8344 host.go:66] Checking if "addons-753790" exists ...
	I1205 19:36:53.018892    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Status}}
	I1205 19:36:53.021226    8344 addons.go:69] Setting metrics-server=true in profile "addons-753790"
	I1205 19:36:53.021252    8344 addons.go:231] Setting addon metrics-server=true in "addons-753790"
	I1205 19:36:53.021293    8344 host.go:66] Checking if "addons-753790" exists ...
	I1205 19:36:53.021703    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Status}}
	I1205 19:36:53.023633    8344 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-753790"
	I1205 19:36:53.023684    8344 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-753790"
	I1205 19:36:53.023721    8344 host.go:66] Checking if "addons-753790" exists ...
	I1205 19:36:53.024155    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Status}}
	I1205 19:36:53.034230    8344 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-753790"
	I1205 19:36:53.034310    8344 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-753790"
	I1205 19:36:53.034397    8344 host.go:66] Checking if "addons-753790" exists ...
	I1205 19:36:53.034926    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Status}}
	I1205 19:36:53.051724    8344 addons.go:69] Setting registry=true in profile "addons-753790"
	I1205 19:36:53.051832    8344 addons.go:231] Setting addon registry=true in "addons-753790"
	I1205 19:36:53.051909    8344 host.go:66] Checking if "addons-753790" exists ...
	I1205 19:36:53.051930    8344 addons.go:69] Setting gcp-auth=true in profile "addons-753790"
	I1205 19:36:53.051958    8344 mustload.go:65] Loading cluster: addons-753790
	I1205 19:36:53.052153    8344 config.go:182] Loaded profile config "addons-753790": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 19:36:53.052387    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Status}}
	I1205 19:36:53.052499    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Status}}
	I1205 19:36:53.066942    8344 addons.go:69] Setting ingress=true in profile "addons-753790"
	I1205 19:36:53.066977    8344 addons.go:231] Setting addon ingress=true in "addons-753790"
	I1205 19:36:53.067040    8344 host.go:66] Checking if "addons-753790" exists ...
	I1205 19:36:53.067530    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Status}}
	I1205 19:36:53.051921    8344 addons.go:69] Setting default-storageclass=true in profile "addons-753790"
	I1205 19:36:53.071013    8344 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-753790"
	I1205 19:36:53.198610    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Status}}
	I1205 19:36:53.071052    8344 addons.go:69] Setting storage-provisioner=true in profile "addons-753790"
	I1205 19:36:53.238605    8344 addons.go:231] Setting addon storage-provisioner=true in "addons-753790"
	I1205 19:36:53.238701    8344 host.go:66] Checking if "addons-753790" exists ...
	I1205 19:36:53.239169    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Status}}
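	[editor's note] Each cli_runner.go:164 line above shells out to the Docker CLI with a Go template to read the driver container's state. A hypothetical sketch of that pattern (not minikube's actual cli_runner):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerStatus runs the same command seen in the log:
//   docker container inspect <name> --format={{.State.Status}}
// and returns the container state, e.g. "running".
func containerStatus(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	status, err := containerStatus("addons-753790")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("state:", status)
}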
	I1205 19:36:53.258520    8344 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1205 19:36:53.286505    8344 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I1205 19:36:53.071064    8344 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-753790"
	I1205 19:36:53.277327    8344 host.go:66] Checking if "addons-753790" exists ...
	I1205 19:36:53.288143    8344 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1205 19:36:53.288176    8344 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-753790"
	I1205 19:36:53.290082    8344 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1205 19:36:53.290089    8344 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1205 19:36:53.299007    8344 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1205 19:36:53.299872    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1205 19:36:53.299928    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:53.300376    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Status}}
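	[editor's note] The ssh_runner.go:362 "scp memory --> ..." lines copy in-memory YAML onto the node over SSH, using the host port resolved by the accompanying docker container inspect of "22/tcp". A rough equivalent, assuming golang.org/x/crypto/ssh and using tee instead of minikube's real scp implementation:

package main

import (
	"bytes"
	"log"

	"golang.org/x/crypto/ssh"
)

// pushBytes writes data to remotePath on the node, mimicking the
// "scp memory --> /etc/kubernetes/addons/..." log lines.
func pushBytes(client *ssh.Client, remotePath string, data []byte) error {
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	session.Stdin = bytes.NewReader(data)
	// tee writes stdin to the target file; sudo because /etc/kubernetes
	// is root-owned on the node.
	return session.Run("sudo tee " + remotePath + " >/dev/null")
}

func main() {
	// Auth is a placeholder; minikube loads the id_rsa key seen in the
	// sshutil.go:53 lines below.
	config := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ /* private key from .minikube/machines/... */ },
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32772", config)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	if err := pushBytes(client, "/etc/kubernetes/addons/ingress-dns-pod.yaml",
		[]byte("# yaml payload here")); err != nil {
		log.Fatal(err)
	}
}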
	I1205 19:36:53.311837    8344 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1205 19:36:53.311862    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1205 19:36:53.311911    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:53.314062    8344 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 19:36:53.314077    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 19:36:53.314131    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:53.324855    8344 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-753790" context rescaled to 1 replicas
	I1205 19:36:53.324898    8344 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:36:53.327090    8344 out.go:177] * Verifying Kubernetes components...
	I1205 19:36:53.299839    8344 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1205 19:36:53.299791    8344 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1205 19:36:53.299799    8344 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1205 19:36:53.331980    8344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:36:53.331987    8344 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1205 19:36:53.331994    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1205 19:36:53.333104    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:53.336369    8344 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1205 19:36:53.341218    8344 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1205 19:36:53.341278    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1205 19:36:53.349446    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:53.351573    8344 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1205 19:36:53.349753    8344 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 19:36:53.370624    8344 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1205 19:36:53.353882    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1205 19:36:53.365834    8344 addons.go:231] Setting addon default-storageclass=true in "addons-753790"
	I1205 19:36:53.381945    8344 host.go:66] Checking if "addons-753790" exists ...
	I1205 19:36:53.382452    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Status}}
	I1205 19:36:53.387024    8344 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1205 19:36:53.392558    8344 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1205 19:36:53.399848    8344 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1205 19:36:53.401925    8344 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1205 19:36:53.401190    8344 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1205 19:36:53.401259    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:53.406414    8344 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1205 19:36:53.404178    8344 out.go:177]   - Using image docker.io/registry:2.8.3
	I1205 19:36:53.410629    8344 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1205 19:36:53.408557    8344 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1205 19:36:53.408817    8344 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1205 19:36:53.412685    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1205 19:36:53.412783    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:53.415188    8344 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1205 19:36:53.415206    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1205 19:36:53.415296    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:53.431591    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1205 19:36:53.431662    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:53.454382    8344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
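	[editor's note] The bash pipeline above edits the live coredns ConfigMap in place: it fetches the Corefile, uses sed to insert a hosts block ahead of the forward directive (and a log directive ahead of errors), then pipes the result back through kubectl replace. Reconstructed from the sed expressions, the resulting Corefile stanza looks approximately like this (shown as a Go string for consistency with the other sketches):

package main

import "fmt"

// injectedCorefile approximates what the sed pipeline produces: a hosts
// plugin entry resolving host.minikube.internal to the gateway IP,
// inserted before the existing forward directive.
const injectedCorefile = `.:53 {
    log
    errors
    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
}`

func main() { fmt.Println(injectedCorefile) }

	This is what start.go:929 later reports as the host record injected into CoreDNS's ConfigMap.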
	I1205 19:36:53.455384    8344 node_ready.go:35] waiting up to 6m0s for node "addons-753790" to be "Ready" ...
	I1205 19:36:53.471641    8344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa Username:docker}
	I1205 19:36:53.486840    8344 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-753790"
	I1205 19:36:53.486881    8344 host.go:66] Checking if "addons-753790" exists ...
	I1205 19:36:53.487323    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Status}}
	I1205 19:36:53.538274    8344 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 19:36:53.533995    8344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa Username:docker}
	I1205 19:36:53.544237    8344 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 19:36:53.544263    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 19:36:53.544327    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:53.562793    8344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa Username:docker}
	I1205 19:36:53.577784    8344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa Username:docker}
	I1205 19:36:53.593529    8344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa Username:docker}
	I1205 19:36:53.634984    8344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa Username:docker}
	I1205 19:36:53.638075    8344 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 19:36:53.638096    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 19:36:53.638153    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:53.643657    8344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa Username:docker}
	I1205 19:36:53.705147    8344 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1205 19:36:53.700262    8344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa Username:docker}
	I1205 19:36:53.702618    8344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa Username:docker}
	I1205 19:36:53.711826    8344 out.go:177]   - Using image docker.io/busybox:stable
	I1205 19:36:53.718432    8344 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1205 19:36:53.718452    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1205 19:36:53.718515    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:53.718927    8344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa Username:docker}
	I1205 19:36:53.729152    8344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa Username:docker}
	I1205 19:36:53.751994    8344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa Username:docker}
	I1205 19:36:53.974061    8344 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1205 19:36:53.974084    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1205 19:36:54.010474    8344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1205 19:36:54.010978    8344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1205 19:36:54.039576    8344 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1205 19:36:54.039601    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1205 19:36:54.130509    8344 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 19:36:54.130555    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1205 19:36:54.136106    8344 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1205 19:36:54.136127    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1205 19:36:54.139680    8344 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1205 19:36:54.139699    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1205 19:36:54.143183    8344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 19:36:54.148826    8344 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1205 19:36:54.148847    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1205 19:36:54.154199    8344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 19:36:54.227614    8344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 19:36:54.228402    8344 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1205 19:36:54.228420    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1205 19:36:54.234902    8344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1205 19:36:54.238716    8344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1205 19:36:54.305305    8344 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1205 19:36:54.305334    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1205 19:36:54.310173    8344 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 19:36:54.310191    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 19:36:54.316836    8344 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1205 19:36:54.316857    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1205 19:36:54.355596    8344 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1205 19:36:54.355666    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1205 19:36:54.406360    8344 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1205 19:36:54.406425    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1205 19:36:54.446485    8344 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1205 19:36:54.446554    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1205 19:36:54.469303    8344 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 19:36:54.469373    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 19:36:54.495002    8344 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1205 19:36:54.495070    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1205 19:36:54.534300    8344 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1205 19:36:54.534369    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1205 19:36:54.603000    8344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1205 19:36:54.613746    8344 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1205 19:36:54.613811    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1205 19:36:54.657897    8344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 19:36:54.703387    8344 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1205 19:36:54.703455    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1205 19:36:54.705863    8344 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1205 19:36:54.705910    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1205 19:36:54.794003    8344 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 19:36:54.794064    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1205 19:36:54.850759    8344 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1205 19:36:54.850819    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1205 19:36:54.901870    8344 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1205 19:36:54.901937    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1205 19:36:54.915538    8344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 19:36:54.960643    8344 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1205 19:36:54.960710    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1205 19:36:55.053432    8344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1205 19:36:55.084367    8344 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1205 19:36:55.084439    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1205 19:36:55.126777    8344 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.672362614s)
	I1205 19:36:55.126866    8344 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1205 19:36:55.227306    8344 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1205 19:36:55.227376    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1205 19:36:55.417582    8344 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1205 19:36:55.417649    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1205 19:36:55.582416    8344 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1205 19:36:55.582486    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1205 19:36:55.689677    8344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1205 19:36:55.812263    8344 node_ready.go:58] node "addons-753790" has status "Ready":"False"
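	[editor's note] The node_ready.go lines that recur from here on poll the node object until its Ready condition turns true, for up to the 6m0s announced earlier. A compact client-go sketch of that check (assumed helper, not minikube's own code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls until the named node reports Ready=True or the
// timeout elapses, mirroring the "waiting up to 6m0s for node ... to be
// Ready" behavior in the log.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
			fmt.Printf("node %q has status \"Ready\":\"False\"\n", name)
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %q never became Ready within %s", name, timeout)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(cs, "addons-753790", 6*time.Minute); err != nil {
		panic(err)
	}
}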
	I1205 19:36:57.687970    8344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.67745176s)
	I1205 19:36:57.688027    8344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.677030226s)
	I1205 19:36:57.688060    8344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.544857618s)
	I1205 19:36:57.688220    8344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.534001062s)
	I1205 19:36:58.100077    8344 node_ready.go:58] node "addons-753790" has status "Ready":"False"
	I1205 19:36:58.127272    8344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.899529492s)
	I1205 19:36:58.192577    8344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.957641134s)
	I1205 19:36:58.834173    8344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.595408762s)
	I1205 19:36:58.834205    8344 addons.go:467] Verifying addon ingress=true in "addons-753790"
	I1205 19:36:58.834279    8344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.231213219s)
	I1205 19:36:58.834296    8344 addons.go:467] Verifying addon registry=true in "addons-753790"
	I1205 19:36:58.836914    8344 out.go:177] * Verifying ingress addon...
	I1205 19:36:58.834699    8344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.176720141s)
	I1205 19:36:58.834803    8344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.919200328s)
	I1205 19:36:58.834849    8344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.781348647s)
	I1205 19:36:58.838957    8344 out.go:177] * Verifying registry addon...
	I1205 19:36:58.841981    8344 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1205 19:36:58.839195    8344 addons.go:467] Verifying addon metrics-server=true in "addons-753790"
	W1205 19:36:58.839219    8344 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1205 19:36:58.842156    8344 retry.go:31] will retry after 262.635693ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
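	[editor's note] The failure above is a CRD establishment race: a single kubectl apply creates the VolumeSnapshot CRDs and a VolumeSnapshotClass object in the same invocation, and the custom resource is rejected because the freshly created CRD is not yet established when its kind is looked up ("no matches for kind ... ensure CRDs are installed first"). minikube's remedy is simply to retry, and at 19:36:59 below it reapplies with --force. An alternative sketch that avoids the race by waiting for the CRD first, shelling out to kubectl as minikube does (paths taken from the log; kubectl wait --for=condition=established is a standard flag):

package main

import (
	"log"
	"os/exec"
)

func run(args ...string) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
}

func main() {
	// Phase 1: create the CRD only.
	run("apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml")
	// Block until the API server reports the CRD as established, so the
	// next apply's kind lookup cannot fail with "no matches for kind".
	run("wait", "--for=condition=established", "--timeout=60s",
		"crd/volumesnapshotclasses.snapshot.storage.k8s.io")
	// Phase 2: the CR that depends on the CRD is now safe to create.
	run("apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml")
}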
	I1205 19:36:58.839974    8344 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1205 19:36:58.849967    8344 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1205 19:36:58.849993    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:58.854209    8344 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1205 19:36:58.854231    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:58.859196    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:58.862650    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:59.105294    8344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 19:36:59.135379    8344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.445605641s)
	I1205 19:36:59.135426    8344 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-753790"
	I1205 19:36:59.137629    8344 out.go:177] * Verifying csi-hostpath-driver addon...
	I1205 19:36:59.141242    8344 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1205 19:36:59.150907    8344 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1205 19:36:59.150928    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:59.160832    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:59.367272    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:59.385771    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:59.672627    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:59.863955    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:59.867451    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:00.109246    8344 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1205 19:37:00.109346    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:37:00.143350    8344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa Username:docker}
	I1205 19:37:00.165717    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:00.366015    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:00.378185    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:00.406114    8344 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1205 19:37:00.429133    8344 addons.go:231] Setting addon gcp-auth=true in "addons-753790"
	I1205 19:37:00.429188    8344 host.go:66] Checking if "addons-753790" exists ...
	I1205 19:37:00.429683    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Status}}
	I1205 19:37:00.450680    8344 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1205 19:37:00.450734    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:37:00.491916    8344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa Username:docker}
	I1205 19:37:00.576724    8344 node_ready.go:58] node "addons-753790" has status "Ready":"False"
	I1205 19:37:00.674459    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:00.762291    8344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.656954891s)
	I1205 19:37:00.765913    8344 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1205 19:37:00.768092    8344 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1205 19:37:00.770074    8344 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1205 19:37:00.770096    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1205 19:37:00.846964    8344 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1205 19:37:00.846990    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1205 19:37:00.866449    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:00.870852    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:00.908822    8344 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 19:37:00.908844    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1205 19:37:00.962578    8344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 19:37:01.178048    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:01.382728    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:01.383613    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:01.666352    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:01.864124    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:01.873663    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:02.189547    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:02.264409    8344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.301791802s)
	I1205 19:37:02.267258    8344 addons.go:467] Verifying addon gcp-auth=true in "addons-753790"
	I1205 19:37:02.271081    8344 out.go:177] * Verifying gcp-auth addon...
	I1205 19:37:02.276105    8344 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1205 19:37:02.297906    8344 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1205 19:37:02.297930    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:02.306026    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
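	[editor's note] The kapi.go:75/kapi.go:96 loops that dominate the rest of this log (registry, ingress-nginx, csi-hostpath-driver, and now gcp-auth) all do the same thing: list pods by label selector in the addon's namespace and poll until none are Pending. A client-go sketch of one poll iteration (selector taken from the log; helper name hypothetical):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podsRunning reports whether every pod matching selector in ns is
// Running, mirroring one iteration of the kapi.go wait loop above.
func podsRunning(cs *kubernetes.Clientset, ns, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
		metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	if len(pods.Items) == 0 {
		return false, nil // nothing scheduled yet
	}
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			fmt.Printf("waiting for pod %q, current state: %s\n", p.Name, p.Status.Phase)
			return false, nil
		}
	}
	return true, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ok, err := podsRunning(cs, "kube-system", "kubernetes.io/minikube-addons=registry")
	fmt.Println(ok, err)
}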
	I1205 19:37:02.363885    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:02.367229    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:02.665814    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:02.810351    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:02.865131    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:02.872830    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:03.051178    8344 node_ready.go:58] node "addons-753790" has status "Ready":"False"
	I1205 19:37:03.168531    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:03.311377    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:03.364762    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:03.368706    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:03.667098    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:03.810664    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:03.863256    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:03.866790    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:04.165802    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:04.311362    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:04.364740    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:04.366435    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:04.665562    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:04.809472    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:04.863361    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:04.869107    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:05.051514    8344 node_ready.go:58] node "addons-753790" has status "Ready":"False"
	I1205 19:37:05.165251    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:05.309600    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:05.362996    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:05.366278    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:05.665972    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:05.809280    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:05.863679    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:05.866226    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:06.165236    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:06.309564    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:06.363048    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:06.367440    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:06.665572    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:06.809866    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:06.863245    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:06.866062    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:07.165176    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:07.309627    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:07.363406    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:07.366226    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:07.551042    8344 node_ready.go:58] node "addons-753790" has status "Ready":"False"
	I1205 19:37:07.665356    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:07.809782    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:07.863683    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:07.866894    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:08.165063    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:08.309370    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:08.363372    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:08.366762    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:08.665060    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:08.810218    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:08.863225    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:08.866158    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:09.165303    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:09.309697    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:09.363339    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:09.366447    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:09.551347    8344 node_ready.go:58] node "addons-753790" has status "Ready":"False"
	I1205 19:37:09.664929    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:09.810103    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:09.864035    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:09.867062    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:10.165171    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:10.309618    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:10.363077    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:10.366117    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:10.665246    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:10.809547    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:10.863615    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:10.867034    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:11.165237    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:11.309582    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:11.363430    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:11.366267    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:11.551627    8344 node_ready.go:58] node "addons-753790" has status "Ready":"False"
	I1205 19:37:11.665324    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:11.810111    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:11.864045    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:11.866036    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:12.166450    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:12.309366    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:12.365035    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:12.366919    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:12.665049    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:12.809890    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:12.863914    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:12.867002    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:13.165485    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:13.309278    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:13.363236    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:13.366468    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:13.551892    8344 node_ready.go:58] node "addons-753790" has status "Ready":"False"
	I1205 19:37:13.665247    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:13.809675    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:13.863618    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:13.866986    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:14.165401    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:14.309731    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:14.364093    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:14.366686    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:14.664881    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:14.809138    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:14.863720    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:14.865908    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:15.165933    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:15.309915    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:15.364086    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:15.366129    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:15.554425    8344 node_ready.go:58] node "addons-753790" has status "Ready":"False"
	I1205 19:37:15.665699    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:15.809454    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:15.864130    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:15.867071    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:16.165310    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:16.309994    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:16.363680    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:16.366522    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:16.665719    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:16.810082    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:16.863699    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:16.867100    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:17.165450    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:17.309255    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:17.364016    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:17.365861    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:17.665270    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:17.809872    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:17.866544    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:17.866948    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:18.054449    8344 node_ready.go:58] node "addons-753790" has status "Ready":"False"
	I1205 19:37:18.165879    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:18.309369    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:18.364262    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:18.366224    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:18.665532    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:18.809358    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:18.864200    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:18.866758    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:19.164811    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:19.309859    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:19.363198    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:19.366257    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:19.665775    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:19.815896    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:19.863175    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:19.866267    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:20.165905    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:20.309676    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:20.363825    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:20.366806    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:20.551910    8344 node_ready.go:58] node "addons-753790" has status "Ready":"False"
	I1205 19:37:20.665282    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:20.810076    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:20.864832    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:20.866556    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:21.164985    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:21.309757    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:21.363171    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:21.366273    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:21.665706    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:21.809831    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:21.863833    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:21.865955    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:22.165638    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:22.309706    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:22.363510    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:22.366838    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:22.664850    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:22.809783    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:22.863411    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:22.866558    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:23.051527    8344 node_ready.go:58] node "addons-753790" has status "Ready":"False"
	I1205 19:37:23.165721    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:23.309622    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:23.363775    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:23.365842    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:23.680285    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:23.812433    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:23.868045    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:23.868881    8344 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1205 19:37:23.868924    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
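[Editor's note] The kapi.go:96 lines that dominate this log are a poll: on each tick minikube lists the pods matching one addon's label selector and logs what it finds, staying at "Pending: [<nil>]" until pods exist and leave the Pending phase (the "Found 2 Pods for label selector" line above is the first tick where the registry selector matched anything). A minimal client-go sketch of one such tick, assuming a *kubernetes.Clientset and namespace are already in hand; this is an illustration, not minikube's literal kapi.go code:

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // addonPodsRunning lists pods matching an addon's label selector and
    // reports whether at least one pod matched and none is still Pending.
    func addonPodsRunning(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) (bool, error) {
        pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
        if err != nil {
            return false, err
        }
        if len(pods.Items) == 0 {
            return false, nil // nothing scheduled yet; keep polling
        }
        for _, p := range pods.Items {
            if p.Status.Phase == corev1.PodPending {
                return false, nil // still the "Pending" state logged above
            }
        }
        return true, nil
    }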
	I1205 19:37:24.071095    8344 node_ready.go:49] node "addons-753790" has status "Ready":"True"
	I1205 19:37:24.071158    8344 node_ready.go:38] duration metric: took 30.615748861s waiting for node "addons-753790" to be "Ready" ...
	I1205 19:37:24.071182    8344 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
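[Editor's note] At 19:37:24 the node flips to Ready after 30.6s, which unblocks the second wait ("extra waiting up to 6m0s for all system-critical pods"). The node check itself reduces to reading one condition off the Node object; a hedged sketch reusing the imports from the first sketch (the helper name and wiring are assumptions):

    // nodeReady reports whether the node's Ready condition is True, the
    // check behind the node_ready.go:58 / node_ready.go:49 lines above.
    func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil // condition not reported yet
    }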
	I1205 19:37:24.091604    8344 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rmhkn" in "kube-system" namespace to be "Ready" ...
	I1205 19:37:24.173378    8344 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1205 19:37:24.173443    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:24.310559    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:24.368571    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:24.371396    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:24.667349    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:24.812560    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:24.871032    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:24.871976    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:25.167256    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:25.310837    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:25.373707    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:25.374682    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:25.667096    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:25.816105    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:25.865297    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:25.872464    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:26.125615    8344 pod_ready.go:92] pod "coredns-5dd5756b68-rmhkn" in "kube-system" namespace has status "Ready":"True"
	I1205 19:37:26.125691    8344 pod_ready.go:81] duration metric: took 2.033978619s waiting for pod "coredns-5dd5756b68-rmhkn" in "kube-system" namespace to be "Ready" ...
	I1205 19:37:26.125727    8344 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-753790" in "kube-system" namespace to be "Ready" ...
	I1205 19:37:26.133633    8344 pod_ready.go:92] pod "etcd-addons-753790" in "kube-system" namespace has status "Ready":"True"
	I1205 19:37:26.133708    8344 pod_ready.go:81] duration metric: took 7.946161ms waiting for pod "etcd-addons-753790" in "kube-system" namespace to be "Ready" ...
	I1205 19:37:26.133748    8344 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-753790" in "kube-system" namespace to be "Ready" ...
	I1205 19:37:26.142849    8344 pod_ready.go:92] pod "kube-apiserver-addons-753790" in "kube-system" namespace has status "Ready":"True"
	I1205 19:37:26.142922    8344 pod_ready.go:81] duration metric: took 9.149041ms waiting for pod "kube-apiserver-addons-753790" in "kube-system" namespace to be "Ready" ...
	I1205 19:37:26.142947    8344 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-753790" in "kube-system" namespace to be "Ready" ...
	I1205 19:37:26.153050    8344 pod_ready.go:92] pod "kube-controller-manager-addons-753790" in "kube-system" namespace has status "Ready":"True"
	I1205 19:37:26.153115    8344 pod_ready.go:81] duration metric: took 10.148188ms waiting for pod "kube-controller-manager-addons-753790" in "kube-system" namespace to be "Ready" ...
	I1205 19:37:26.153157    8344 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8xqms" in "kube-system" namespace to be "Ready" ...
	I1205 19:37:26.167305    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:26.309355    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:26.363340    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:26.367361    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:26.452623    8344 pod_ready.go:92] pod "kube-proxy-8xqms" in "kube-system" namespace has status "Ready":"True"
	I1205 19:37:26.452652    8344 pod_ready.go:81] duration metric: took 299.47191ms waiting for pod "kube-proxy-8xqms" in "kube-system" namespace to be "Ready" ...
	I1205 19:37:26.452663    8344 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-753790" in "kube-system" namespace to be "Ready" ...
	I1205 19:37:26.667041    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:26.816414    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:26.854672    8344 pod_ready.go:92] pod "kube-scheduler-addons-753790" in "kube-system" namespace has status "Ready":"True"
	I1205 19:37:26.854696    8344 pod_ready.go:81] duration metric: took 402.024647ms waiting for pod "kube-scheduler-addons-753790" in "kube-system" namespace to be "Ready" ...
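[Editor's note] Between 19:37:24 and 19:37:26 the six control-plane pods (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) each pass their pod_ready check in turn, most within milliseconds because they were already up. That check keys off the PodReady condition rather than the pod phase; a minimal sketch, reusing the imports from the first sketch (the helper name is an assumption):

    // podReady mirrors the pod_ready.go checks above: a pod counts as
    // "Ready" only when its PodReady condition is True, which is stricter
    // than Phase == Running (readiness probes must also pass).
    func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }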
	I1205 19:37:26.854707    8344 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-5nn9m" in "kube-system" namespace to be "Ready" ...
	I1205 19:37:26.865690    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:26.870437    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:27.166835    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:27.309874    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:27.363397    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:27.367502    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:27.666402    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:27.815215    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:27.863372    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:27.866992    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:28.165895    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:28.310378    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:28.364918    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:28.372439    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:28.669691    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:28.811351    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:28.864735    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:28.869161    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:29.160005    8344 pod_ready.go:102] pod "metrics-server-7c66d45ddc-5nn9m" in "kube-system" namespace has status "Ready":"False"
	I1205 19:37:29.166263    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:29.310220    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:29.364383    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:29.368852    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:29.667781    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:29.811272    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:29.863795    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:29.867789    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:30.167992    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:30.310310    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:30.364582    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:30.370500    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:30.670421    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:30.810602    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:30.867214    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:30.870356    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:31.189973    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:31.311068    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:31.366057    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:31.367871    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:31.660426    8344 pod_ready.go:102] pod "metrics-server-7c66d45ddc-5nn9m" in "kube-system" namespace has status "Ready":"False"
	I1205 19:37:31.669837    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:31.811238    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:31.865726    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:31.869712    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:32.167000    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:32.309658    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:32.371405    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:32.374362    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:32.677532    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:32.809964    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:32.869913    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:32.871278    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:33.166242    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:33.309582    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:33.363678    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:33.366628    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:33.666477    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:33.810516    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:33.864076    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:33.868633    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:34.161114    8344 pod_ready.go:102] pod "metrics-server-7c66d45ddc-5nn9m" in "kube-system" namespace has status "Ready":"False"
	I1205 19:37:34.166154    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:34.309737    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:34.366296    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:34.376275    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:34.680205    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:34.809523    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:34.870180    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:34.871078    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:35.167936    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:35.309664    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:35.367454    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:35.370534    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:35.666903    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:35.810571    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:35.881740    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:35.884869    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:36.166803    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:36.310593    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:36.365601    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:36.369302    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:36.659151    8344 pod_ready.go:102] pod "metrics-server-7c66d45ddc-5nn9m" in "kube-system" namespace has status "Ready":"False"
	I1205 19:37:36.666599    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:36.809816    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:36.863956    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:36.866635    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:37.166543    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:37.309898    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:37.363593    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:37.367552    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:37.666983    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:37.810036    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:37.869435    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:37.873258    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:38.159293    8344 pod_ready.go:92] pod "metrics-server-7c66d45ddc-5nn9m" in "kube-system" namespace has status "Ready":"True"
	I1205 19:37:38.159367    8344 pod_ready.go:81] duration metric: took 11.304651809s waiting for pod "metrics-server-7c66d45ddc-5nn9m" in "kube-system" namespace to be "Ready" ...
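[Editor's note] metrics-server took 11.3s here: five "Ready":"False" probes roughly 2.5s apart before the condition flipped. The surrounding loop is a bounded poll against the 6m0s budget named in the log; a sketch of that shape using apimachinery's wait helper together with the podReady helper above (the 2s interval and the helper name are read off the log cadence, not taken from minikube's source):

    import (
        "context"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls podReady until it reports true or the 6-minute
    // budget ("waiting up to 6m0s" above) expires.
    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
        return wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
            return podReady(ctx, cs, ns, name)
        })
    }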
	I1205 19:37:38.159391    8344 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-5g44z" in "kube-system" namespace to be "Ready" ...
	I1205 19:37:38.173730    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:38.311237    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:38.364698    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:38.373635    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:38.670788    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:38.809756    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:38.878545    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:38.904386    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:39.167532    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:39.310019    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:39.373175    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:39.377134    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:39.675805    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:39.810568    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:39.865667    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:39.870451    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:40.169613    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:40.188918    8344 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-5g44z" in "kube-system" namespace has status "Ready":"False"
	I1205 19:37:40.316451    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:40.363750    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:40.373322    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:40.667724    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:40.810217    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:40.870747    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:40.871225    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:41.166994    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:41.310730    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:41.364245    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:41.369963    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:41.667172    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:41.810633    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:41.866295    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:41.874045    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:42.168285    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:42.309918    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:42.364284    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:42.367164    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:42.667006    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:42.685062    8344 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-5g44z" in "kube-system" namespace has status "Ready":"False"
	I1205 19:37:42.810195    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:42.878356    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:42.879340    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:43.168969    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:43.310800    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:43.365741    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:43.370879    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:43.667509    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:43.811324    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:43.869858    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:43.872112    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:44.166296    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:44.309352    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:44.364301    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:44.366862    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:44.667105    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:44.810169    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:44.867293    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:44.869438    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:45.166829    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:45.187382    8344 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-5g44z" in "kube-system" namespace has status "Ready":"False"
	I1205 19:37:45.310635    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:45.363921    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:45.366765    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:45.666757    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:45.809542    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:45.865362    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:45.870611    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:46.166438    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:46.310853    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:46.368764    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:46.376698    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:46.667584    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:46.810333    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:46.865992    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:46.870832    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:47.166977    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:47.196126    8344 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-5g44z" in "kube-system" namespace has status "Ready":"False"
	I1205 19:37:47.310177    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:47.367643    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:47.368884    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:47.666503    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:47.810328    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:47.864753    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:47.869526    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:48.167085    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:48.310208    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:48.363803    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:48.366873    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:48.667699    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:48.810757    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:48.868640    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:48.886736    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:49.166544    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:49.309654    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:49.364900    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:49.373731    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:49.666648    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:49.685026    8344 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-5g44z" in "kube-system" namespace has status "Ready":"False"
	I1205 19:37:49.809957    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:49.863734    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:49.867375    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:50.166246    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:50.185118    8344 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-5g44z" in "kube-system" namespace has status "Ready":"True"
	I1205 19:37:50.185139    8344 pod_ready.go:81] duration metric: took 12.025727981s waiting for pod "nvidia-device-plugin-daemonset-5g44z" in "kube-system" namespace to be "Ready" ...
	I1205 19:37:50.185160    8344 pod_ready.go:38] duration metric: took 26.113955326s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 19:37:50.185177    8344 api_server.go:52] waiting for apiserver process to appear ...
	I1205 19:37:50.185238    8344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 19:37:50.202894    8344 api_server.go:72] duration metric: took 56.877965839s to wait for apiserver process to appear ...
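[Editor's note] With all pods settled, minikube verifies the apiserver binary is actually running by executing the pgrep command shown verbatim in the log above. A stdlib sketch of the same probe (run locally here for self-containment; the real code routes it through its SSH runner, per the ssh_runner.go line):

    import "os/exec"

    // apiserverProcessRunning re-runs the log's probe: with -f the pattern
    // is matched against the full command line, -x requires a full-string
    // match, and -n keeps only the newest match. pgrep exits 0 only when
    // at least one process matched.
    func apiserverProcessRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }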
	I1205 19:37:50.202969    8344 api_server.go:88] waiting for apiserver healthz status ...
	I1205 19:37:50.203021    8344 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:37:50.216050    8344 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1205 19:37:50.217621    8344 api_server.go:141] control plane version: v1.28.4
	I1205 19:37:50.217643    8344 api_server.go:131] duration metric: took 14.63437ms to wait for apiserver health ...
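[Editor's note] The healthz probe is a plain HTTPS GET against the apiserver expecting 200 and body "ok", as the two lines above show. A self-contained stdlib sketch; the TLS shortcut is an assumption made only to keep the example standalone, whereas the real probe goes through minikube's authenticated API client:

    import (
        "crypto/tls"
        "io"
        "net/http"
        "time"
    )

    // healthz GETs e.g. https://192.168.49.2:8443/healthz and reports
    // whether the apiserver answered 200 "ok".
    func healthz(url string) (bool, error) {
        c := &http.Client{
            Timeout: 5 * time.Second,
            // Assumption: skip cert verification for the sketch only.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := c.Get(url)
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }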
	I1205 19:37:50.217652    8344 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 19:37:50.227050    8344 system_pods.go:59] 18 kube-system pods found
	I1205 19:37:50.227083    8344 system_pods.go:61] "coredns-5dd5756b68-rmhkn" [04289914-4790-4f6d-9b26-c32e7df62269] Running
	I1205 19:37:50.227093    8344 system_pods.go:61] "csi-hostpath-attacher-0" [c447d03a-fc55-4a98-ab99-6bdc4c9ee7a3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1205 19:37:50.227099    8344 system_pods.go:61] "csi-hostpath-resizer-0" [5f3d490d-5ef1-4df9-9bb4-2d88aafec0e5] Running
	I1205 19:37:50.227109    8344 system_pods.go:61] "csi-hostpathplugin-bblgk" [a46c5bbd-7a88-4a8a-8cd2-e38f0a86ef43] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1205 19:37:50.227116    8344 system_pods.go:61] "etcd-addons-753790" [cc672098-b116-421c-85c9-a0782494ac32] Running
	I1205 19:37:50.227128    8344 system_pods.go:61] "kindnet-j7sxw" [6767c908-4d95-48fe-8cad-132009ede731] Running
	I1205 19:37:50.227139    8344 system_pods.go:61] "kube-apiserver-addons-753790" [c3332431-7e6b-4d8e-ab6d-39e60810e4d0] Running
	I1205 19:37:50.227144    8344 system_pods.go:61] "kube-controller-manager-addons-753790" [88ee7e00-f689-4987-bc56-0a61aa738872] Running
	I1205 19:37:50.227151    8344 system_pods.go:61] "kube-ingress-dns-minikube" [4bbdda14-9e6c-48ab-bdaa-32bfcebc5fe8] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1205 19:37:50.227160    8344 system_pods.go:61] "kube-proxy-8xqms" [950dcb1d-2f3f-474e-a825-0c79deff5993] Running
	I1205 19:37:50.227166    8344 system_pods.go:61] "kube-scheduler-addons-753790" [20ce1ad9-7803-437e-bce6-657460ce774f] Running
	I1205 19:37:50.227171    8344 system_pods.go:61] "metrics-server-7c66d45ddc-5nn9m" [dfdc10e3-f82d-4c2f-b28e-d02c4992cbd7] Running
	I1205 19:37:50.227177    8344 system_pods.go:61] "nvidia-device-plugin-daemonset-5g44z" [e67179c1-2a66-42ab-af09-92698daea73e] Running
	I1205 19:37:50.227184    8344 system_pods.go:61] "registry-j6vr2" [2025c2db-46b4-422f-bf24-e183c416a7ae] Running
	I1205 19:37:50.227191    8344 system_pods.go:61] "registry-proxy-6gp6x" [a29e840a-e254-486b-98ae-b646b95120f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1205 19:37:50.227197    8344 system_pods.go:61] "snapshot-controller-58dbcc7b99-p95jm" [7b00d0fd-5b00-4d3c-bce0-60cb2b9328c6] Running
	I1205 19:37:50.227203    8344 system_pods.go:61] "snapshot-controller-58dbcc7b99-zs27h" [f46b8fba-4b6e-471e-95ee-7639a87beca6] Running
	I1205 19:37:50.227211    8344 system_pods.go:61] "storage-provisioner" [74b4f959-2938-46db-a04a-6cbe38891fab] Running
	I1205 19:37:50.227217    8344 system_pods.go:74] duration metric: took 9.55959ms to wait for pod list to return data ...
	I1205 19:37:50.227226    8344 default_sa.go:34] waiting for default service account to be created ...
	I1205 19:37:50.232999    8344 default_sa.go:45] found service account: "default"
	I1205 19:37:50.233024    8344 default_sa.go:55] duration metric: took 5.790905ms for default service account to be created ...
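[Editor's note] The default_sa check above confirms the "default" ServiceAccount exists before declaring k8s-apps healthy, since pods in that namespace cannot be admitted until it is created. A minimal sketch, reusing the client-go imports from the first sketch (helper name assumed; NotFound and other errors are conflated for brevity):

    // defaultSAExists reports whether the "default" ServiceAccount has
    // been created in the "default" namespace.
    func defaultSAExists(ctx context.Context, cs *kubernetes.Clientset) bool {
        _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
        return err == nil
    }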
	I1205 19:37:50.233035    8344 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 19:37:50.246183    8344 system_pods.go:86] 18 kube-system pods found
	I1205 19:37:50.246212    8344 system_pods.go:89] "coredns-5dd5756b68-rmhkn" [04289914-4790-4f6d-9b26-c32e7df62269] Running
	I1205 19:37:50.246222    8344 system_pods.go:89] "csi-hostpath-attacher-0" [c447d03a-fc55-4a98-ab99-6bdc4c9ee7a3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1205 19:37:50.246228    8344 system_pods.go:89] "csi-hostpath-resizer-0" [5f3d490d-5ef1-4df9-9bb4-2d88aafec0e5] Running
	I1205 19:37:50.246258    8344 system_pods.go:89] "csi-hostpathplugin-bblgk" [a46c5bbd-7a88-4a8a-8cd2-e38f0a86ef43] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1205 19:37:50.246270    8344 system_pods.go:89] "etcd-addons-753790" [cc672098-b116-421c-85c9-a0782494ac32] Running
	I1205 19:37:50.246276    8344 system_pods.go:89] "kindnet-j7sxw" [6767c908-4d95-48fe-8cad-132009ede731] Running
	I1205 19:37:50.246281    8344 system_pods.go:89] "kube-apiserver-addons-753790" [c3332431-7e6b-4d8e-ab6d-39e60810e4d0] Running
	I1205 19:37:50.246286    8344 system_pods.go:89] "kube-controller-manager-addons-753790" [88ee7e00-f689-4987-bc56-0a61aa738872] Running
	I1205 19:37:50.246300    8344 system_pods.go:89] "kube-ingress-dns-minikube" [4bbdda14-9e6c-48ab-bdaa-32bfcebc5fe8] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1205 19:37:50.246306    8344 system_pods.go:89] "kube-proxy-8xqms" [950dcb1d-2f3f-474e-a825-0c79deff5993] Running
	I1205 19:37:50.246314    8344 system_pods.go:89] "kube-scheduler-addons-753790" [20ce1ad9-7803-437e-bce6-657460ce774f] Running
	I1205 19:37:50.246334    8344 system_pods.go:89] "metrics-server-7c66d45ddc-5nn9m" [dfdc10e3-f82d-4c2f-b28e-d02c4992cbd7] Running
	I1205 19:37:50.246349    8344 system_pods.go:89] "nvidia-device-plugin-daemonset-5g44z" [e67179c1-2a66-42ab-af09-92698daea73e] Running
	I1205 19:37:50.246354    8344 system_pods.go:89] "registry-j6vr2" [2025c2db-46b4-422f-bf24-e183c416a7ae] Running
	I1205 19:37:50.246362    8344 system_pods.go:89] "registry-proxy-6gp6x" [a29e840a-e254-486b-98ae-b646b95120f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1205 19:37:50.246370    8344 system_pods.go:89] "snapshot-controller-58dbcc7b99-p95jm" [7b00d0fd-5b00-4d3c-bce0-60cb2b9328c6] Running
	I1205 19:37:50.246376    8344 system_pods.go:89] "snapshot-controller-58dbcc7b99-zs27h" [f46b8fba-4b6e-471e-95ee-7639a87beca6] Running
	I1205 19:37:50.246380    8344 system_pods.go:89] "storage-provisioner" [74b4f959-2938-46db-a04a-6cbe38891fab] Running
	I1205 19:37:50.246389    8344 system_pods.go:126] duration metric: took 13.347156ms to wait for k8s-apps to be running ...
	I1205 19:37:50.246399    8344 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 19:37:50.246452    8344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:37:50.265730    8344 system_svc.go:56] duration metric: took 19.321388ms WaitForService to wait for kubelet.
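[Editor's note] The kubelet liveness check shells out to systemctl with the exact arguments shown two lines above; is-active with --quiet prints nothing and signals purely through its exit status. A stdlib sketch (run locally for self-containment; minikube sends the command over SSH):

    import "os/exec"

    // kubeletActive mirrors the log's probe: `systemctl is-active --quiet`
    // exits 0 only when the queried unit(s) are in the active state.
    func kubeletActive() bool {
        return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
    }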
	I1205 19:37:50.265757    8344 kubeadm.go:581] duration metric: took 56.940835101s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1205 19:37:50.265783    8344 node_conditions.go:102] verifying NodePressure condition ...
	I1205 19:37:50.272402    8344 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1205 19:37:50.272430    8344 node_conditions.go:123] node cpu capacity is 2
	I1205 19:37:50.272441    8344 node_conditions.go:105] duration metric: took 6.653344ms to run NodePressure ...
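[Editor's note] The NodePressure verification reads the two capacity figures printed above straight off the Node's status. A sketch reusing the first sketch's imports plus fmt (helper name assumed); note that map-indexed Quantity values are copied into variables first so their pointer-receiver methods can be called:

    // printNodeCapacity reads the figures logged above from
    // node.Status.Capacity: ephemeral-storage (203034800Ki) and cpu (2).
    func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset, name string) error {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
        cpu := node.Status.Capacity[corev1.ResourceCPU]
        fmt.Printf("ephemeral-storage=%s cpu=%d\n", storage.String(), cpu.Value())
        return nil
    }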
	I1205 19:37:50.272452    8344 start.go:228] waiting for startup goroutines ...
	I1205 19:37:50.310447    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:50.366196    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:50.373243    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:50.668018    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:50.809704    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:50.865242    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:50.874154    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:51.166804    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:51.313332    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:51.373433    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:51.386648    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:51.673329    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:51.814696    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:51.865990    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:51.872141    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:52.168928    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:52.309605    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:52.365134    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:52.368774    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:52.666539    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:52.809871    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:52.864301    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:52.870903    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:53.167481    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:53.310007    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:53.363856    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:53.367400    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:53.667209    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:53.809743    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:53.863718    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:53.867476    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:54.170780    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:54.310872    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:54.369032    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:54.375576    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:54.666623    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:54.813837    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:54.865263    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:54.869868    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:55.168730    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:55.310215    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:55.363515    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:55.367196    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:55.666885    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:55.810619    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:55.896162    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:55.897134    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:56.172316    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:56.311112    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:56.366210    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:56.369572    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:56.668951    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:56.811863    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:56.863996    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:56.867057    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:57.166754    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:57.310269    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:57.363496    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:57.367164    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:57.665948    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:57.809494    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:57.863736    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:57.867394    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:58.166937    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:58.310427    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:58.366127    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:58.369849    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:58.666691    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:58.810951    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:58.866253    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:58.868737    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:59.167888    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:59.309445    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:59.363852    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:59.366997    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:59.667449    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:59.810236    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:59.866400    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:59.870827    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:38:00.168789    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:00.310208    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:38:00.367215    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:00.370888    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:38:00.670052    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:00.812699    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:38:00.870175    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:00.875138    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:38:01.167676    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:01.309816    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:38:01.365546    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:01.369043    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:38:01.666610    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:01.810638    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:38:01.863308    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:01.867229    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:38:02.167233    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:02.309600    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:38:02.364390    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:02.367171    8344 kapi.go:107] duration metric: took 1m3.525198287s to wait for kubernetes.io/minikube-addons=registry ...
	I1205 19:38:02.666814    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:02.813224    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:38:02.863930    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:03.166548    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:03.309862    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:38:03.363647    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:03.667310    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:03.809997    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:38:03.864074    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:04.167632    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:04.310453    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:38:04.364371    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:04.666593    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:04.810039    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:38:04.863911    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:05.166121    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:05.309838    8344 kapi.go:107] duration metric: took 1m3.033731814s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1205 19:38:05.312038    8344 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-753790 cluster.
	I1205 19:38:05.314983    8344 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1205 19:38:05.317143    8344 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1205 19:38:05.364348    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:05.667696    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:05.864385    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:06.167773    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:06.365953    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:06.667802    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:06.864969    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:07.166988    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:07.363484    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:07.666944    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:07.864879    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:08.166735    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:08.363893    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:08.666791    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:08.864193    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:09.166282    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:09.364667    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:09.666124    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:09.864137    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:10.166751    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:10.363975    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:10.669369    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:10.865070    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:11.168351    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:11.387086    8344 kapi.go:107] duration metric: took 1m12.547105276s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1205 19:38:11.667067    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:12.168174    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:12.667029    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:13.190404    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:13.667246    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:14.166403    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:14.666780    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:15.166419    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:15.666278    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:16.167034    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:16.669865    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:17.167223    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:17.667281    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:18.166813    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:18.671659    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:19.167090    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:19.666378    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:20.166473    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:20.666318    8344 kapi.go:107] duration metric: took 1m21.525073367s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1205 19:38:20.668810    8344 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, ingress-dns, default-storageclass, storage-provisioner, storage-provisioner-rancher, inspektor-gadget, metrics-server, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1205 19:38:20.670687    8344 addons.go:502] enable addons completed in 1m27.65390539s: enabled=[cloud-spanner nvidia-device-plugin ingress-dns default-storageclass storage-provisioner storage-provisioner-rancher inspektor-gadget metrics-server volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1205 19:38:20.670727    8344 start.go:233] waiting for cluster config update ...
	I1205 19:38:20.670759    8344 start.go:242] writing updated cluster config ...
	I1205 19:38:20.671066    8344 ssh_runner.go:195] Run: rm -f paused
	I1205 19:38:21.009787    8344 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1205 19:38:21.012604    8344 out.go:177] * Done! kubectl is now configured to use "addons-753790" cluster and "default" namespace by default
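	The gcp-auth notes earlier in this log are actionable. A minimal sketch of the opt-out, assuming the conventional "true" value for the label (only the `gcp-auth-skip-secret` key itself is confirmed by the log above):
	
	apiVersion: v1
	kind: Pod
	metadata:
	  name: skip-gcp-auth-demo          # hypothetical name, for illustration only
	  labels:
	    gcp-auth-skip-secret: "true"    # key taken from the log; the "true" value is an assumption
	spec:
	  containers:
	  - name: app
	    image: nginx                    # arbitrary image; nginx is used elsewhere in this report
	
	For pods created before the addon came up, the log offers two options: recreate them, or rerun the enable step with --refresh, presumably `minikube addons enable gcp-auth --refresh` against this profile (`-p addons-753790`).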
	
	* 
	* ==> CRI-O <==
	* Dec 05 19:42:26 addons-753790 crio[893]: time="2023-12-05 19:42:26.353200791Z" level=info msg="Closing host port tcp:80"
	Dec 05 19:42:26 addons-753790 crio[893]: time="2023-12-05 19:42:26.353240652Z" level=info msg="Closing host port tcp:443"
	Dec 05 19:42:26 addons-753790 crio[893]: time="2023-12-05 19:42:26.354676775Z" level=info msg="Host port tcp:80 does not have an open socket"
	Dec 05 19:42:26 addons-753790 crio[893]: time="2023-12-05 19:42:26.354700062Z" level=info msg="Host port tcp:443 does not have an open socket"
	Dec 05 19:42:26 addons-753790 crio[893]: time="2023-12-05 19:42:26.354846557Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7c6974c4d8-8mcwh Namespace:ingress-nginx ID:936c33a7a1c7836eeef305297e12bc4b024f1344de7181a23ee1239d9344bc45 UID:63f68af4-4de8-4d3d-9f2f-14c3abbffa03 NetNS:/var/run/netns/0540b228-3f61-4a4b-9e7a-cca4297295e3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 05 19:42:26 addons-753790 crio[893]: time="2023-12-05 19:42:26.354979432Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7c6974c4d8-8mcwh from CNI network \"kindnet\" (type=ptp)"
	Dec 05 19:42:26 addons-753790 crio[893]: time="2023-12-05 19:42:26.377154535Z" level=info msg="Stopped pod sandbox: 936c33a7a1c7836eeef305297e12bc4b024f1344de7181a23ee1239d9344bc45" id=9fdb89db-6426-40c3-88f5-9c93ba355277 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 05 19:42:26 addons-753790 crio[893]: time="2023-12-05 19:42:26.460858308Z" level=info msg="Removing container: fe95aa644c1e47a77c36eb9ca5572171936682a01506a9cdc696e82663ea7175" id=da2b94ad-2409-428b-8885-df2b0a0185b1 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 19:42:26 addons-753790 crio[893]: time="2023-12-05 19:42:26.478662386Z" level=info msg="Removed container fe95aa644c1e47a77c36eb9ca5572171936682a01506a9cdc696e82663ea7175: ingress-nginx/ingress-nginx-controller-7c6974c4d8-8mcwh/controller" id=da2b94ad-2409-428b-8885-df2b0a0185b1 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 05 19:42:28 addons-753790 crio[893]: time="2023-12-05 19:42:28.489643787Z" level=info msg="Checking image status: ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931" id=7383e61d-682d-4c77-96b1-aa4411ff7f11 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 19:42:28 addons-753790 crio[893]: time="2023-12-05 19:42:28.489895088Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:45e33ff5627bef80cc4abebf01df370198c2f8e21477685063cd5dd2a33b648c,RepoTags:[],RepoDigests:[ghcr.io/inspektor-gadget/inspektor-gadget@sha256:4decec48d0f1fdd5d28e85b558eddef3ba91bbf7ebc7f43b5ec6a86b210a78c9 ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931],Size_:248786914,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=7383e61d-682d-4c77-96b1-aa4411ff7f11 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 19:42:28 addons-753790 crio[893]: time="2023-12-05 19:42:28.491344471Z" level=info msg="Pulling image: ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931" id=8d8a4f11-3f22-42f4-951d-76f5eab8241c name=/runtime.v1.ImageService/PullImage
	Dec 05 19:42:28 addons-753790 crio[893]: time="2023-12-05 19:42:28.494328719Z" level=info msg="Trying to access \"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931\""
	Dec 05 19:42:28 addons-753790 crio[893]: time="2023-12-05 19:42:28.767882098Z" level=info msg="Pulled image: ghcr.io/inspektor-gadget/inspektor-gadget@sha256:4decec48d0f1fdd5d28e85b558eddef3ba91bbf7ebc7f43b5ec6a86b210a78c9" id=8d8a4f11-3f22-42f4-951d-76f5eab8241c name=/runtime.v1.ImageService/PullImage
	Dec 05 19:42:28 addons-753790 crio[893]: time="2023-12-05 19:42:28.768838447Z" level=info msg="Checking image status: ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931" id=d2575ad4-fa38-4198-a4dc-e570ec1875ff name=/runtime.v1.ImageService/ImageStatus
	Dec 05 19:42:28 addons-753790 crio[893]: time="2023-12-05 19:42:28.769120805Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:45e33ff5627bef80cc4abebf01df370198c2f8e21477685063cd5dd2a33b648c,RepoTags:[],RepoDigests:[ghcr.io/inspektor-gadget/inspektor-gadget@sha256:4decec48d0f1fdd5d28e85b558eddef3ba91bbf7ebc7f43b5ec6a86b210a78c9 ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931],Size_:248786914,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=d2575ad4-fa38-4198-a4dc-e570ec1875ff name=/runtime.v1.ImageService/ImageStatus
	Dec 05 19:42:28 addons-753790 crio[893]: time="2023-12-05 19:42:28.770087484Z" level=info msg="Creating container: gadget/gadget-qxcgc/gadget" id=5d049c77-9630-4fbc-b9a5-78d1d6514f2c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:42:28 addons-753790 crio[893]: time="2023-12-05 19:42:28.770169675Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 05 19:42:28 addons-753790 conmon[7789]: conmon 41d822ba4128232c3ae8 <nwarn>: runtime stderr: time="2023-12-05T19:42:28Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	                                            time="2023-12-05T19:42:28Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	                                            time="2023-12-05T19:42:28Z" level=warning msg="lstat : no such file or directory"
	                                            time="2023-12-05T19:42:28Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
	Dec 05 19:42:28 addons-753790 conmon[7789]: conmon 41d822ba4128232c3ae8 <error>: Failed to create container: exit status 1
	Dec 05 19:42:28 addons-753790 crio[893]: time="2023-12-05 19:42:28.838496805Z" level=error msg="Container creation error: time=\"2023-12-05T19:42:28Z\" level=warning msg=\"cannot toggle freezer: cgroups not configured for container\"\ntime=\"2023-12-05T19:42:28Z\" level=warning msg=\"cannot toggle freezer: cgroups not configured for container\"\ntime=\"2023-12-05T19:42:28Z\" level=warning msg=\"lstat : no such file or directory\"\ntime=\"2023-12-05T19:42:28Z\" level=error msg=\"container_linux.go:380: starting container process caused: exec: \\\"/entrypoint.sh\\\": stat /entrypoint.sh: no such file or directory\"\n" id=5d049c77-9630-4fbc-b9a5-78d1d6514f2c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:42:28 addons-753790 crio[893]: time="2023-12-05 19:42:28.846150706Z" level=info msg="createCtr: deleting container ID 41d822ba4128232c3ae8a6601b991470aba0a459aca8e5c6334722d9aef4accf from idIndex" id=5d049c77-9630-4fbc-b9a5-78d1d6514f2c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:42:28 addons-753790 crio[893]: time="2023-12-05 19:42:28.846200930Z" level=info msg="createCtr: deleting container ID 41d822ba4128232c3ae8a6601b991470aba0a459aca8e5c6334722d9aef4accf from idIndex" id=5d049c77-9630-4fbc-b9a5-78d1d6514f2c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:42:28 addons-753790 crio[893]: time="2023-12-05 19:42:28.846219269Z" level=info msg="createCtr: deleting container ID 41d822ba4128232c3ae8a6601b991470aba0a459aca8e5c6334722d9aef4accf from idIndex" id=5d049c77-9630-4fbc-b9a5-78d1d6514f2c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:42:28 addons-753790 crio[893]: time="2023-12-05 19:42:28.853264918Z" level=info msg="createCtr: deleting container ID 41d822ba4128232c3ae8a6601b991470aba0a459aca8e5c6334722d9aef4accf from idIndex" id=5d049c77-9630-4fbc-b9a5-78d1d6514f2c name=/runtime.v1.RuntimeService/CreateContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	83bfffe9093bd       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                             7 seconds ago       Exited              hello-world-app           2                   7a9f75b53ae70       hello-world-app-5d77478584-hfd2s
	248179e129740       docker.io/library/nginx@sha256:18d2bb20c22e511b92a3ec81f553edfcaeeb74fd1c96a92c56a6c4252c75eec7                              2 minutes ago       Running             nginx                     0                   0dfb8124e8975       nginx
	e9bc574b338f9       ghcr.io/headlamp-k8s/headlamp@sha256:7a9587036bd29304f8f1387a7245556a3c479434670b2ca58e3624d44d2a68c9                        2 minutes ago       Running             headlamp                  0                   df30d83a277a6       headlamp-777fd4b855-4wt8j
	1d3df6e6d00dc       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa                 4 minutes ago       Running             gcp-auth                  0                   d436ce15d9b7b       gcp-auth-d4c87556c-hzq5m
	1c0e5a9dc592a       af594c6a879f2e441ea446a122296abbbe11aae5547e780f2582fbcda5df271c                                                             4 minutes ago       Exited              patch                     1                   1f1ad679cc244       ingress-nginx-admission-patch-dcpfl
	d303014b8d9b9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:67202a0258c6f81d073f265f449a732c89cc1112a8e80ea27317294df6dce2b5   4 minutes ago       Exited              create                    0                   8cceeed6b837f       ingress-nginx-admission-create-t979k
	d3ac64f27fd20       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             5 minutes ago       Running             storage-provisioner       0                   e2d165cc33a69       storage-provisioner
	5490b1908a513       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                             5 minutes ago       Running             coredns                   0                   0a827e54023ce       coredns-5dd5756b68-rmhkn
	42b1944b80035       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39                                                             5 minutes ago       Running             kube-proxy                0                   ea1a9cfa74b27       kube-proxy-8xqms
	fed428c064458       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                                             5 minutes ago       Running             kindnet-cni               0                   c8e6f8c914957       kindnet-j7sxw
	f402a5f264d2f       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b                                                             5 minutes ago       Running             kube-controller-manager   0                   3536c8b40fb94       kube-controller-manager-addons-753790
	940f8074d6bd5       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419                                                             5 minutes ago       Running             kube-apiserver            0                   e378c24b59eda       kube-apiserver-addons-753790
	49e03b8e4b31d       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54                                                             5 minutes ago       Running             kube-scheduler            0                   d0118c0a56c79       kube-scheduler-addons-753790
	efad096daa660       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                                             5 minutes ago       Running             etcd                      0                   a9011d0d4417c       etcd-addons-753790
	
	* 
	* ==> coredns [5490b1908a51341623358c1eb0b51c35ee5b88da19aaf50b3eaa21aacacae120] <==
	* [INFO] 10.244.0.18:36855 - 59428 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000047565s
	[INFO] 10.244.0.18:36855 - 62673 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000076226s
	[INFO] 10.244.0.18:36855 - 31863 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000052957s
	[INFO] 10.244.0.18:36855 - 49236 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000065633s
	[INFO] 10.244.0.18:36855 - 7934 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001108703s
	[INFO] 10.244.0.18:36855 - 7223 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000957055s
	[INFO] 10.244.0.18:36855 - 25649 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000049699s
	[INFO] 10.244.0.18:45570 - 64233 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000103763s
	[INFO] 10.244.0.18:33219 - 36579 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000111124s
	[INFO] 10.244.0.18:45570 - 33825 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000098561s
	[INFO] 10.244.0.18:33219 - 39345 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000070098s
	[INFO] 10.244.0.18:45570 - 56380 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000102458s
	[INFO] 10.244.0.18:45570 - 10064 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000060644s
	[INFO] 10.244.0.18:45570 - 32758 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000065363s
	[INFO] 10.244.0.18:45570 - 51718 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000051733s
	[INFO] 10.244.0.18:45570 - 26821 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001469502s
	[INFO] 10.244.0.18:33219 - 4631 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000152953s
	[INFO] 10.244.0.18:33219 - 7117 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000632908s
	[INFO] 10.244.0.18:45570 - 5975 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001651075s
	[INFO] 10.244.0.18:33219 - 37753 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000115972s
	[INFO] 10.244.0.18:45570 - 13725 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000061777s
	[INFO] 10.244.0.18:33219 - 62665 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000037424s
	[INFO] 10.244.0.18:33219 - 5809 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00109793s
	[INFO] 10.244.0.18:33219 - 28366 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002802067s
	[INFO] 10.244.0.18:33219 - 51221 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000061031s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-753790
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-753790
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728
	                    minikube.k8s.io/name=addons-753790
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_05T19_36_41_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-753790
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Dec 2023 19:36:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-753790
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Dec 2023 19:42:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Dec 2023 19:42:17 +0000   Tue, 05 Dec 2023 19:36:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Dec 2023 19:42:17 +0000   Tue, 05 Dec 2023 19:36:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Dec 2023 19:42:17 +0000   Tue, 05 Dec 2023 19:36:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Dec 2023 19:42:17 +0000   Tue, 05 Dec 2023 19:37:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-753790
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 10f3442ae5bd4cb6ac1d005edf4e5579
	  System UUID:                a984fe80-b922-46c4-acc7-231aa98aa32e
	  Boot ID:                    ade55ee8-b6ef-4756-8af5-2453aa07c908
	  Kernel Version:             5.15.0-1050-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-hfd2s         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  gadget                      gadget-qxcgc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	  gcp-auth                    gcp-auth-d4c87556c-hzq5m                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	  headlamp                    headlamp-777fd4b855-4wt8j                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m56s
	  kube-system                 coredns-5dd5756b68-rmhkn                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m39s
	  kube-system                 etcd-addons-753790                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m51s
	  kube-system                 kindnet-j7sxw                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m39s
	  kube-system                 kube-apiserver-addons-753790             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 kube-controller-manager-addons-753790    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 kube-proxy-8xqms                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	  kube-system                 kube-scheduler-addons-753790             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m33s                  kube-proxy       
	  Normal  Starting                 5m58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m58s (x8 over 5m58s)  kubelet          Node addons-753790 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m58s (x8 over 5m58s)  kubelet          Node addons-753790 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m58s (x8 over 5m58s)  kubelet          Node addons-753790 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m51s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m51s                  kubelet          Node addons-753790 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m51s                  kubelet          Node addons-753790 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m51s                  kubelet          Node addons-753790 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m40s                  node-controller  Node addons-753790 event: Registered Node addons-753790 in Controller
	  Normal  NodeReady                5m8s                   kubelet          Node addons-753790 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Dec 5 19:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015635] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +1.321413] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.302274] kauditd_printk_skb: 26 callbacks suppressed
	
	* 
	* ==> etcd [efad096daa6601649f8ea74e53d8bbd7484d55852d7c430fbc34eda28bc180a3] <==
	* {"level":"info","ts":"2023-12-05T19:36:34.030425Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-12-05T19:36:34.030545Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-05T19:36:34.030618Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-05T19:36:34.030666Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-05T19:36:34.035983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-12-05T19:36:34.036164Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-12-05T19:36:34.975795Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-05T19:36:34.975911Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-05T19:36:34.975951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-12-05T19:36:34.975996Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-12-05T19:36:34.976027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-12-05T19:36:34.976065Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-12-05T19:36:34.9761Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-12-05T19:36:34.979867Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T19:36:34.98393Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-753790 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-05T19:36:34.987812Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T19:36:34.987923Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T19:36:34.987973Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T19:36:34.988009Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-05T19:36:34.98901Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-05T19:36:34.989097Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-05T19:36:34.991816Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-05T19:36:34.991893Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-05T19:36:34.992654Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-12-05T19:36:56.110304Z","caller":"traceutil/trace.go:171","msg":"trace[257927791] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"113.16543ms","start":"2023-12-05T19:36:55.997124Z","end":"2023-12-05T19:36:56.110289Z","steps":["trace[257927791] 'process raft request'  (duration: 51.277794ms)","trace[257927791] 'compare'  (duration: 61.821781ms)"],"step_count":2}
	
	* 
	* ==> gcp-auth [1d3df6e6d00dc10181a118ee13946f20fe47e312f4b1ffb5871be1445f12b7fa] <==
	* 2023/12/05 19:38:04 GCP Auth Webhook started!
	2023/12/05 19:38:31 Ready to marshal response ...
	2023/12/05 19:38:31 Ready to write response ...
	2023/12/05 19:38:42 Ready to marshal response ...
	2023/12/05 19:38:42 Ready to write response ...
	2023/12/05 19:38:42 Ready to marshal response ...
	2023/12/05 19:38:42 Ready to write response ...
	2023/12/05 19:38:50 Ready to marshal response ...
	2023/12/05 19:38:50 Ready to write response ...
	2023/12/05 19:38:57 Ready to marshal response ...
	2023/12/05 19:38:57 Ready to write response ...
	2023/12/05 19:39:14 Ready to marshal response ...
	2023/12/05 19:39:14 Ready to write response ...
	2023/12/05 19:39:35 Ready to marshal response ...
	2023/12/05 19:39:35 Ready to write response ...
	2023/12/05 19:39:35 Ready to marshal response ...
	2023/12/05 19:39:35 Ready to write response ...
	2023/12/05 19:39:35 Ready to marshal response ...
	2023/12/05 19:39:35 Ready to write response ...
	2023/12/05 19:39:46 Ready to marshal response ...
	2023/12/05 19:39:46 Ready to write response ...
	2023/12/05 19:42:05 Ready to marshal response ...
	2023/12/05 19:42:05 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  19:42:31 up 24 min,  0 users,  load average: 0.19, 0.57, 0.37
	Linux addons-753790 5.15.0-1050-aws #55~20.04.1-Ubuntu SMP Mon Nov 6 12:18:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [fed428c0644589df3c411742042bdbbd3affc28eeaf51b382ea5b1dda67305a3] <==
	* I1205 19:40:23.564674       1 main.go:227] handling current node
	I1205 19:40:33.576096       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:40:33.576123       1 main.go:227] handling current node
	I1205 19:40:43.588054       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:40:43.588083       1 main.go:227] handling current node
	I1205 19:40:53.592055       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:40:53.592081       1 main.go:227] handling current node
	I1205 19:41:03.604647       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:41:03.604677       1 main.go:227] handling current node
	I1205 19:41:13.608606       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:41:13.608634       1 main.go:227] handling current node
	I1205 19:41:23.621476       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:41:23.621501       1 main.go:227] handling current node
	I1205 19:41:33.625970       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:41:33.626000       1 main.go:227] handling current node
	I1205 19:41:43.631688       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:41:43.631713       1 main.go:227] handling current node
	I1205 19:41:53.635739       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:41:53.635974       1 main.go:227] handling current node
	I1205 19:42:03.648448       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:42:03.648477       1 main.go:227] handling current node
	I1205 19:42:13.660228       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:42:13.660253       1 main.go:227] handling current node
	I1205 19:42:23.672663       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:42:23.672691       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [940f8074d6bd526a437a01a29138b4d811e400064a5968a96703989071fc2704] <==
	* I1205 19:38:37.236742       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1205 19:39:06.532374       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1205 19:39:08.450318       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1205 19:39:30.366542       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:39:30.366651       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:39:30.382653       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:39:30.382705       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:39:30.403337       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:39:30.403460       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:39:30.447864       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:39:30.448013       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:39:30.500272       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:39:30.500407       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:39:30.527851       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:39:30.527892       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1205 19:39:31.403714       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1205 19:39:31.528107       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1205 19:39:31.534577       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1205 19:39:35.228412       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.246.102"}
	I1205 19:39:37.243775       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1205 19:39:46.091177       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1205 19:39:46.389307       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.176.121"}
	I1205 19:40:38.964481       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1205 19:41:37.602032       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I1205 19:42:06.031352       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.177.56"}
	
	* 
	* ==> kube-controller-manager [f402a5f264d2f85c867a58fbc63ef3df203f1c3e2c8361c8379ee70c9ce2d383] <==
	* W1205 19:40:57.170266       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:40:57.170300       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:41:04.874435       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:41:04.874467       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:41:44.534077       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:41:44.534109       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:41:45.136927       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:41:45.136963       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:41:47.753406       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:41:47.753437       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1205 19:42:05.784192       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1205 19:42:05.811998       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-hfd2s"
	I1205 19:42:05.819307       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="35.759672ms"
	I1205 19:42:05.829632       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="10.196462ms"
	I1205 19:42:05.829773       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="31.878µs"
	I1205 19:42:05.846349       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="78.253µs"
	I1205 19:42:08.439054       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="81.428µs"
	I1205 19:42:09.450187       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="65.839µs"
	I1205 19:42:10.436268       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="70.655µs"
	W1205 19:42:20.832952       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:42:20.832984       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1205 19:42:23.132642       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1205 19:42:23.140647       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="4.48µs"
	I1205 19:42:23.143713       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1205 19:42:24.475423       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="54.532µs"
	
	* 
	* ==> kube-proxy [42b1944b800350c918edede48d949a74a384517b604dec31f44caab9433173b6] <==
	* I1205 19:36:54.004249       1 server_others.go:69] "Using iptables proxy"
	I1205 19:36:56.521209       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1205 19:36:58.561313       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 19:36:58.565084       1 server_others.go:152] "Using iptables Proxier"
	I1205 19:36:58.565167       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1205 19:36:58.565200       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1205 19:36:58.565301       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1205 19:36:58.565568       1 server.go:846] "Version info" version="v1.28.4"
	I1205 19:36:58.565735       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 19:36:58.567018       1 config.go:188] "Starting service config controller"
	I1205 19:36:58.567125       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1205 19:36:58.567171       1 config.go:97] "Starting endpoint slice config controller"
	I1205 19:36:58.567199       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1205 19:36:58.567722       1 config.go:315] "Starting node config controller"
	I1205 19:36:58.569872       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1205 19:36:58.668484       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1205 19:36:58.668737       1 shared_informer.go:318] Caches are synced for service config
	I1205 19:36:58.670896       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [49e03b8e4b31d4071c934d34366e1605b553bf107e4a169c473753f4b5868652] <==
	* W1205 19:36:37.640539       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 19:36:37.640571       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1205 19:36:37.647497       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1205 19:36:37.647538       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1205 19:36:37.647625       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 19:36:37.647648       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1205 19:36:37.647736       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1205 19:36:37.647767       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1205 19:36:37.647739       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 19:36:37.647793       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 19:36:37.647849       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1205 19:36:37.647864       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1205 19:36:37.647909       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1205 19:36:37.647957       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1205 19:36:37.647929       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1205 19:36:37.648026       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1205 19:36:37.647999       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1205 19:36:37.648096       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1205 19:36:37.648059       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 19:36:37.648182       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1205 19:36:37.655998       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1205 19:36:37.656037       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1205 19:36:38.517544       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 19:36:38.517675       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1205 19:36:41.023836       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Dec 05 19:42:24 addons-753790 kubelet[1361]: I1205 19:42:24.490542    1361 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="da4ecba3-a6e9-47b4-ac80-69bd9b2c323a" path="/var/lib/kubelet/pods/da4ecba3-a6e9-47b4-ac80-69bd9b2c323a/volumes"
	Dec 05 19:42:24 addons-753790 kubelet[1361]: E1205 19:42:24.590773    1361 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/b0e53cace67050b2191de9ffb0dad3f8d52c7738626a091754508c1fcfadc2ab/diff" to get inode usage: stat /var/lib/containers/storage/overlay/b0e53cace67050b2191de9ffb0dad3f8d52c7738626a091754508c1fcfadc2ab/diff: no such file or directory, extraDiskErr: <nil>
	Dec 05 19:42:24 addons-753790 kubelet[1361]: E1205 19:42:24.693643    1361 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/f13b7326e0ae383e98e2f78dbf0fec75351a70ccaec2f33d55c87d86ad930c18/diff" to get inode usage: stat /var/lib/containers/storage/overlay/f13b7326e0ae383e98e2f78dbf0fec75351a70ccaec2f33d55c87d86ad930c18/diff: no such file or directory, extraDiskErr: <nil>
	Dec 05 19:42:26 addons-753790 kubelet[1361]: I1205 19:42:26.459536    1361 scope.go:117] "RemoveContainer" containerID="fe95aa644c1e47a77c36eb9ca5572171936682a01506a9cdc696e82663ea7175"
	Dec 05 19:42:26 addons-753790 kubelet[1361]: I1205 19:42:26.478913    1361 scope.go:117] "RemoveContainer" containerID="fe95aa644c1e47a77c36eb9ca5572171936682a01506a9cdc696e82663ea7175"
	Dec 05 19:42:26 addons-753790 kubelet[1361]: E1205 19:42:26.479286    1361 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe95aa644c1e47a77c36eb9ca5572171936682a01506a9cdc696e82663ea7175\": container with ID starting with fe95aa644c1e47a77c36eb9ca5572171936682a01506a9cdc696e82663ea7175 not found: ID does not exist" containerID="fe95aa644c1e47a77c36eb9ca5572171936682a01506a9cdc696e82663ea7175"
	Dec 05 19:42:26 addons-753790 kubelet[1361]: I1205 19:42:26.479364    1361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe95aa644c1e47a77c36eb9ca5572171936682a01506a9cdc696e82663ea7175"} err="failed to get container status \"fe95aa644c1e47a77c36eb9ca5572171936682a01506a9cdc696e82663ea7175\": rpc error: code = NotFound desc = could not find container \"fe95aa644c1e47a77c36eb9ca5572171936682a01506a9cdc696e82663ea7175\": container with ID starting with fe95aa644c1e47a77c36eb9ca5572171936682a01506a9cdc696e82663ea7175 not found: ID does not exist"
	Dec 05 19:42:26 addons-753790 kubelet[1361]: I1205 19:42:26.525980    1361 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/63f68af4-4de8-4d3d-9f2f-14c3abbffa03-webhook-cert\") pod \"63f68af4-4de8-4d3d-9f2f-14c3abbffa03\" (UID: \"63f68af4-4de8-4d3d-9f2f-14c3abbffa03\") "
	Dec 05 19:42:26 addons-753790 kubelet[1361]: I1205 19:42:26.526032    1361 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm848\" (UniqueName: \"kubernetes.io/projected/63f68af4-4de8-4d3d-9f2f-14c3abbffa03-kube-api-access-hm848\") pod \"63f68af4-4de8-4d3d-9f2f-14c3abbffa03\" (UID: \"63f68af4-4de8-4d3d-9f2f-14c3abbffa03\") "
	Dec 05 19:42:26 addons-753790 kubelet[1361]: I1205 19:42:26.528314    1361 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63f68af4-4de8-4d3d-9f2f-14c3abbffa03-kube-api-access-hm848" (OuterVolumeSpecName: "kube-api-access-hm848") pod "63f68af4-4de8-4d3d-9f2f-14c3abbffa03" (UID: "63f68af4-4de8-4d3d-9f2f-14c3abbffa03"). InnerVolumeSpecName "kube-api-access-hm848". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 05 19:42:26 addons-753790 kubelet[1361]: I1205 19:42:26.529248    1361 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63f68af4-4de8-4d3d-9f2f-14c3abbffa03-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "63f68af4-4de8-4d3d-9f2f-14c3abbffa03" (UID: "63f68af4-4de8-4d3d-9f2f-14c3abbffa03"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 05 19:42:26 addons-753790 kubelet[1361]: I1205 19:42:26.626603    1361 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-hm848\" (UniqueName: \"kubernetes.io/projected/63f68af4-4de8-4d3d-9f2f-14c3abbffa03-kube-api-access-hm848\") on node \"addons-753790\" DevicePath \"\""
	Dec 05 19:42:26 addons-753790 kubelet[1361]: I1205 19:42:26.626640    1361 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/63f68af4-4de8-4d3d-9f2f-14c3abbffa03-webhook-cert\") on node \"addons-753790\" DevicePath \"\""
	Dec 05 19:42:28 addons-753790 kubelet[1361]: I1205 19:42:28.492941    1361 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="63f68af4-4de8-4d3d-9f2f-14c3abbffa03" path="/var/lib/kubelet/pods/63f68af4-4de8-4d3d-9f2f-14c3abbffa03/volumes"
	Dec 05 19:42:28 addons-753790 kubelet[1361]: E1205 19:42:28.853561    1361 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err=<
	Dec 05 19:42:28 addons-753790 kubelet[1361]:         rpc error: code = Unknown desc = container create failed: time="2023-12-05T19:42:28Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	Dec 05 19:42:28 addons-753790 kubelet[1361]:         time="2023-12-05T19:42:28Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	Dec 05 19:42:28 addons-753790 kubelet[1361]:         time="2023-12-05T19:42:28Z" level=warning msg="lstat : no such file or directory"
	Dec 05 19:42:28 addons-753790 kubelet[1361]:         time="2023-12-05T19:42:28Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
	Dec 05 19:42:28 addons-753790 kubelet[1361]:  > podSandboxID="1a9ede046909add9684135c149ff559d1545071f5ebb837fc96c544250cc557d"
	Dec 05 19:42:28 addons-753790 kubelet[1361]: E1205 19:42:28.853728    1361 kuberuntime_manager.go:1261] container &Container{Name:gadget,Image:ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931,Command:[/entrypoint.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:GADGET_POD_UID,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.uid,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:TRACELOOP_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:TRACELOOP_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:TRACELOOP_POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:GADGET_IMAGE,Value:ghcr.io/inspektor-gadget/inspektor-gadget,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_VERSION,Value:v0.16.1,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_OPTION_HOOK_MODE,Value:auto,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_OPTION_FALLBACK_POD_INFORMER,Value:true,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_CONTAINERD_SOCKETPATH,Value:/run/containerd/containerd.sock,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_CRIO_SOCKETPATH,Value:/run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_DOCKER_SOCKETPATH,Value:/run/docker.sock,ValueFrom:nil,},EnvVar{Name:HOST_ROOT,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:run,ReadOnly:false,MountPath:/run,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:modules,ReadOnly:false,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:debugfs,ReadOnly:false,MountPath:/sys/kernel/debug,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cgroup,ReadOnly:false,MountPath:/sys/fs/cgroup,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:bpffs,ReadOnly:false,MountPath:/sys/fs/bpf,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-snrgg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/gadgettracermanager -liveness],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/gadgettracermanager -liveness],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYSLOG SYS_PTRACE SYS_RESOURCE IPC_LOCK SYS_MODULE NET_RAW],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod gadget-qxcgc_gadget(97bec43a-0805-4763-9862-53819201c4e8): CreateContainerError: container create failed: time="2023-12-05T19:42:28Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	Dec 05 19:42:28 addons-753790 kubelet[1361]: time="2023-12-05T19:42:28Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	Dec 05 19:42:28 addons-753790 kubelet[1361]: time="2023-12-05T19:42:28Z" level=warning msg="lstat : no such file or directory"
	Dec 05 19:42:28 addons-753790 kubelet[1361]: time="2023-12-05T19:42:28Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
	Dec 05 19:42:28 addons-753790 kubelet[1361]: E1205 19:42:28.853778    1361 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CreateContainerError: \"container create failed: time=\\\"2023-12-05T19:42:28Z\\\" level=warning msg=\\\"cannot toggle freezer: cgroups not configured for container\\\"\\ntime=\\\"2023-12-05T19:42:28Z\\\" level=warning msg=\\\"cannot toggle freezer: cgroups not configured for container\\\"\\ntime=\\\"2023-12-05T19:42:28Z\\\" level=warning msg=\\\"lstat : no such file or directory\\\"\\ntime=\\\"2023-12-05T19:42:28Z\\\" level=error msg=\\\"container_linux.go:380: starting container process caused: exec: \\\\\\\"/entrypoint.sh\\\\\\\": stat /entrypoint.sh: no such file or directory\\\"\\n\"" pod="gadget/gadget-qxcgc" podUID="97bec43a-0805-4763-9862-53819201c4e8"
	
	* 
	* ==> storage-provisioner [d3ac64f27fd207935cf6e7d2b9db91f81624dd57d7d3c03202c425be5c5d0591] <==
	* I1205 19:37:24.831015       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 19:37:24.847516       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 19:37:24.847712       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 19:37:24.855360       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 19:37:24.855516       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-753790_0e8b6e2e-197e-4d66-a0f4-e0115409849b!
	I1205 19:37:24.855506       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d43edea0-fe69-4b72-abe1-b28a5b73d893", APIVersion:"v1", ResourceVersion:"880", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-753790_0e8b6e2e-197e-4d66-a0f4-e0115409849b became leader
	I1205 19:37:24.958132       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-753790_0e8b6e2e-197e-4d66-a0f4-e0115409849b!
	

-- /stdout --
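
The storage-provisioner log above shows client-go leader election on the kube-system/k8s.io-minikube-hostpath lock before the provisioner controller starts (the recorded event indicates an Endpoints-based lock). Below is a minimal sketch of the same pattern in Go, using the Lease lock that newer client-go versions recommend instead of the Endpoints lock seen in the log; the lock name and namespace are taken from the log, while the identity and timings are illustrative.

package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	// Assumes in-cluster credentials, as the provisioner pod would have.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// Lock name and namespace match the log; the identity is a placeholder.
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath",
			Namespace: "kube-system",
		},
		Client:     clientset.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "example-provisioner"},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		LeaseDuration:   15 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		ReleaseOnCancel: true,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// Corresponds to the "Starting provisioner controller" point in the log.
				log.Println("acquired lease; starting controller")
			},
			OnStoppedLeading: func() { log.Println("lost lease; shutting down") },
		},
	})
}

Only the elected leader reaches OnStartedLeading, which is why the log prints "attempting to acquire leader lease" before "Starting provisioner controller".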
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-753790 -n addons-753790
helpers_test.go:261: (dbg) Run:  kubectl --context addons-753790 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: gadget-qxcgc
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-753790 describe pod gadget-qxcgc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-753790 describe pod gadget-qxcgc: exit status 1 (89.801288ms)

** stderr ** 
	Error from server (NotFound): pods "gadget-qxcgc" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-753790 describe pod gadget-qxcgc: exit status 1
--- FAIL: TestAddons/parallel/Ingress (167.67s)
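
The post-mortem above locates the stuck pod by listing pods whose phase is not Running (helpers_test.go:268). A minimal client-go sketch of that field-selector query, assuming a placeholder kubeconfig path:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; the test harness resolves this itself.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// Empty namespace means all namespaces, mirroring kubectl's -A flag.
	pods, err := clientset.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}

Against the cluster state captured above, this would be expected to report the single non-running pod, gadget/gadget-qxcgc, as Pending.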

TestAddons/parallel/InspektorGadget (483.31s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-qxcgc" [97bec43a-0805-4763-9862-53819201c4e8] Pending / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:837: ***** TestAddons/parallel/InspektorGadget: pod "k8s-app=gadget" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:837: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-753790 -n addons-753790
addons_test.go:837: TestAddons/parallel/InspektorGadget: showing logs for failed pods as of 2023-12-05 19:47:42.509794576 +0000 UTC m=+767.712390578
addons_test.go:837: (dbg) Run:  kubectl --context addons-753790 describe po gadget-qxcgc -n gadget
addons_test.go:837: (dbg) kubectl --context addons-753790 describe po gadget-qxcgc -n gadget:
Name:             gadget-qxcgc
Namespace:        gadget
Priority:         0
Service Account:  gadget
Node:             addons-753790/192.168.49.2
Start Time:       Tue, 05 Dec 2023 19:36:58 +0000
Labels:           controller-revision-hash=5d55b57d4c
                  k8s-app=gadget
                  pod-template-generation=1
Annotations:      container.apparmor.security.beta.kubernetes.io/gadget: unconfined
                  inspektor-gadget.kinvolk.io/option-hook-mode: auto
Status:           Pending
IP:               192.168.49.2
IPs:
  IP:           192.168.49.2
Controlled By:  DaemonSet/gadget
Containers:
  gadget:
    Container ID:  
    Image:         ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      /entrypoint.sh
    State:          Waiting
      Reason:       CreateContainerError
    Ready:          False
    Restart Count:  0
    Liveness:       exec [/bin/gadgettracermanager -liveness] delay=0s timeout=2s period=5s #success=1 #failure=3
    Readiness:      exec [/bin/gadgettracermanager -liveness] delay=0s timeout=2s period=5s #success=1 #failure=3
    Environment:
      NODE_NAME:                                       (v1:spec.nodeName)
      GADGET_POD_UID:                                  (v1:metadata.uid)
      TRACELOOP_NODE_NAME:                             (v1:spec.nodeName)
      TRACELOOP_POD_NAME:                              gadget-qxcgc (v1:metadata.name)
      TRACELOOP_POD_NAMESPACE:                         gadget (v1:metadata.namespace)
      GADGET_IMAGE:                                    ghcr.io/inspektor-gadget/inspektor-gadget
      INSPEKTOR_GADGET_VERSION:                        v0.16.1
      INSPEKTOR_GADGET_OPTION_HOOK_MODE:               auto
      INSPEKTOR_GADGET_OPTION_FALLBACK_POD_INFORMER:   true
      INSPEKTOR_GADGET_CONTAINERD_SOCKETPATH:          /run/containerd/containerd.sock
      INSPEKTOR_GADGET_CRIO_SOCKETPATH:                /run/crio/crio.sock
      INSPEKTOR_GADGET_DOCKER_SOCKETPATH:              /run/docker.sock
      HOST_ROOT:                                       /host
    Mounts:
      /host from host (rw)
      /lib/modules from modules (rw)
      /run from run (rw)
      /sys/fs/bpf from bpffs (rw)
      /sys/fs/cgroup from cgroup (rw)
      /sys/kernel/debug from debugfs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-snrgg (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  host:
    Type:          HostPath (bare host directory volume)
    Path:          /
    HostPathType:  
  run:
    Type:          HostPath (bare host directory volume)
    Path:          /run
    HostPathType:  
  cgroup:
    Type:          HostPath (bare host directory volume)
    Path:          /sys/fs/cgroup
    HostPathType:  
  modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  bpffs:
    Type:          HostPath (bare host directory volume)
    Path:          /sys/fs/bpf
    HostPathType:  
  debugfs:
    Type:          HostPath (bare host directory volume)
    Path:          /sys/kernel/debug
    HostPathType:  
  kube-api-access-snrgg:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 :NoSchedule op=Exists
                             :NoExecute op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
Type     Reason     Age   From               Message
----     ------     ----  ----               -------
Normal   Scheduled  10m   default-scheduler  Successfully assigned gadget/gadget-qxcgc to addons-753790
Normal   Pulled     10m   kubelet            Successfully pulled image "ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931" in 5.504s (5.504s including waiting)
Warning  Failed     10m   kubelet            Error: container create failed: time="2023-12-05T19:37:04Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:37:04Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:37:04Z" level=warning msg="lstat : no such file or directory"
time="2023-12-05T19:37:04Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
Normal   Pulled  10m  kubelet  Successfully pulled image "ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931" in 280ms (280ms including waiting)
Warning  Failed  10m  kubelet  Error: container create failed: time="2023-12-05T19:37:05Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:37:05Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:37:05Z" level=warning msg="lstat : no such file or directory"
time="2023-12-05T19:37:05Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
Warning  Failed  10m  kubelet  Error: container create failed: time="2023-12-05T19:37:19Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:37:19Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:37:19Z" level=warning msg="lstat : no such file or directory"
time="2023-12-05T19:37:19Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
Normal   Pulled  10m    kubelet  Successfully pulled image "ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931" in 247ms (247ms including waiting)
Normal   Pulled  9m41s  kubelet  Successfully pulled image "ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931" in 271ms (30.215s including waiting)
Warning  Failed  9m41s  kubelet  Error: container create failed: time="2023-12-05T19:38:01Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:38:01Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:38:01Z" level=warning msg="lstat : no such file or directory"
time="2023-12-05T19:38:01Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
Normal   Pulled  9m25s  kubelet  Successfully pulled image "ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931" in 336ms (2.046s including waiting)
Warning  Failed  9m25s  kubelet  Error: container create failed: time="2023-12-05T19:38:17Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:38:17Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:38:17Z" level=warning msg="lstat : no such file or directory"
time="2023-12-05T19:38:17Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
Normal   Pulled  9m13s  kubelet  Successfully pulled image "ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931" in 263ms (263ms including waiting)
Warning  Failed  9m13s  kubelet  Error: container create failed: time="2023-12-05T19:38:29Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:38:29Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:38:29Z" level=warning msg="lstat : no such file or directory"
time="2023-12-05T19:38:29Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
Normal   Pulled  8m57s  kubelet  Successfully pulled image "ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931" in 293ms (293ms including waiting)
Warning  Failed  8m57s  kubelet  Error: container create failed: time="2023-12-05T19:38:45Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:38:45Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:38:45Z" level=warning msg="lstat : no such file or directory"
time="2023-12-05T19:38:45Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
Normal   Pulled  8m40s  kubelet  Successfully pulled image "ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931" in 259ms (2.692s including waiting)
Warning  Failed  8m40s  kubelet  Error: container create failed: time="2023-12-05T19:39:02Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:39:02Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:39:02Z" level=warning msg="lstat : no such file or directory"
time="2023-12-05T19:39:02Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
Normal  Pulled   5m44s (x12 over 8m11s)  kubelet  (combined from similar events): Successfully pulled image "ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931" in 278ms (278ms including waiting)
Normal  Pulling  40s (x43 over 10m)      kubelet  Pulling image "ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931"
addons_test.go:837: (dbg) Run:  kubectl --context addons-753790 logs gadget-qxcgc -n gadget
addons_test.go:837: (dbg) Non-zero exit: kubectl --context addons-753790 logs gadget-qxcgc -n gadget: exit status 1 (104.784686ms)

** stderr ** 
	Error from server (BadRequest): container "gadget" in pod "gadget-qxcgc" is waiting to start: CreateContainerError

** /stderr **
addons_test.go:837: kubectl --context addons-753790 logs gadget-qxcgc -n gadget: exit status 1
addons_test.go:838: failed waiting for inspektor-gadget pod: k8s-app=gadget within 8m0s: context deadline exceeded
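
addons_test.go:837 above polls for up to 8m0s for pods labeled k8s-app=gadget to become ready; here the poll can never succeed because the gadget container stays in CreateContainerError. A rough sketch of that kind of wait loop with client-go (an approximation of the test helper, not its actual code; the kubeconfig path is a placeholder):

package main

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		log.Fatal(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 5s, for at most 8m, until every matching pod is Ready.
	err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 8*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := clientset.CoreV1().Pods("gadget").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=gadget",
			})
			if err != nil {
				return false, nil // treat API errors as transient and keep polling
			}
			if len(pods.Items) == 0 {
				return false, nil
			}
			for _, p := range pods.Items {
				if !podReady(p) {
					return false, nil
				}
			}
			return true, nil
		})
	if err != nil {
		// Mirrors the failure mode above: context deadline exceeded after 8m0s.
		log.Fatalf("pods not ready within 8m0s: %v", err)
	}
}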
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/InspektorGadget]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-753790
helpers_test.go:235: (dbg) docker inspect addons-753790:

-- stdout --
	[
	    {
	        "Id": "9a7b5170de31ef918d86be75d8cc26debc9bf3dcf4d5952d94980f981fbf56db",
	        "Created": "2023-12-05T19:36:15.156368213Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8824,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-05T19:36:15.531511994Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4e0f3cc6f04c458835e9edb05d52f031520d40521bc3568d81cbb7c06a79ef2",
	        "ResolvConfPath": "/var/lib/docker/containers/9a7b5170de31ef918d86be75d8cc26debc9bf3dcf4d5952d94980f981fbf56db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9a7b5170de31ef918d86be75d8cc26debc9bf3dcf4d5952d94980f981fbf56db/hostname",
	        "HostsPath": "/var/lib/docker/containers/9a7b5170de31ef918d86be75d8cc26debc9bf3dcf4d5952d94980f981fbf56db/hosts",
	        "LogPath": "/var/lib/docker/containers/9a7b5170de31ef918d86be75d8cc26debc9bf3dcf4d5952d94980f981fbf56db/9a7b5170de31ef918d86be75d8cc26debc9bf3dcf4d5952d94980f981fbf56db-json.log",
	        "Name": "/addons-753790",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-753790:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-753790",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3e1e23157cc6754d45f70c5c6f2cb6d4745d8cc057f46063f6d561e99db7ffd9-init/diff:/var/lib/docker/overlay2/ad36f68c22d2503e0656ab5d87c276f08a38342a08463cd6653b41bc4f40eea5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3e1e23157cc6754d45f70c5c6f2cb6d4745d8cc057f46063f6d561e99db7ffd9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3e1e23157cc6754d45f70c5c6f2cb6d4745d8cc057f46063f6d561e99db7ffd9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3e1e23157cc6754d45f70c5c6f2cb6d4745d8cc057f46063f6d561e99db7ffd9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-753790",
	                "Source": "/var/lib/docker/volumes/addons-753790/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-753790",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-753790",
	                "name.minikube.sigs.k8s.io": "addons-753790",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "60cc3be46668aefc83a73c6402ade022263c2a9a54aee32a7268835b03965df3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/60cc3be46668",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-753790": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9a7b5170de31",
	                        "addons-753790"
	                    ],
	                    "NetworkID": "f3b232aa44038f4b7212bf899e0f8a0b2f47e0c09f356712e8e7c87ac892de44",
	                    "EndpointID": "cd31adb0fe2e816237bd965e85b85eb556b584348a5a83358190c6a1265e8736",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
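
The docker inspect output above includes the host port bindings for the node container (22, 2376, 5000, 8443, and 32443, all published on 127.0.0.1). A minimal sketch of reading the same mappings with the Docker Go SDK; the container name comes from the report, everything else is illustrative:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	// Connect using the standard DOCKER_HOST environment settings.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	info, err := cli.ContainerInspect(context.Background(), "addons-753790")
	if err != nil {
		log.Fatal(err)
	}

	// NetworkSettings.Ports maps container ports (e.g. "8443/tcp") to host bindings.
	for port, bindings := range info.NetworkSettings.Ports {
		for _, b := range bindings {
			fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
		}
	}
}

Run against the container above, this would print the same five mappings that appear under "Ports" in the inspect output (for example, 8443/tcp -> 127.0.0.1:32769).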
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-753790 -n addons-753790
helpers_test.go:244: <<< TestAddons/parallel/InspektorGadget FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/InspektorGadget]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-753790 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-753790 logs -n 25: (1.503442052s)
helpers_test.go:252: TestAddons/parallel/InspektorGadget logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-855824   | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC |                     |
	|         | -p download-only-855824                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                                                           |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC | 05 Dec 23 19:35 UTC |
	| delete  | -p download-only-855824                                                                     | download-only-855824   | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC | 05 Dec 23 19:35 UTC |
	| delete  | -p download-only-855824                                                                     | download-only-855824   | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC | 05 Dec 23 19:35 UTC |
	| start   | --download-only -p                                                                          | download-docker-224607 | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC |                     |
	|         | download-docker-224607                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-224607                                                                   | download-docker-224607 | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC | 05 Dec 23 19:35 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-741946   | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC |                     |
	|         | binary-mirror-741946                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:32795                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-741946                                                                     | binary-mirror-741946   | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC | 05 Dec 23 19:35 UTC |
	| addons  | disable dashboard -p                                                                        | addons-753790          | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC |                     |
	|         | addons-753790                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-753790          | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC |                     |
	|         | addons-753790                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-753790 --wait=true                                                                | addons-753790          | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC | 05 Dec 23 19:38 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ip      | addons-753790 ip                                                                            | addons-753790          | jenkins | v1.32.0 | 05 Dec 23 19:38 UTC | 05 Dec 23 19:38 UTC |
	| addons  | addons-753790 addons disable                                                                | addons-753790          | jenkins | v1.32.0 | 05 Dec 23 19:38 UTC | 05 Dec 23 19:38 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-753790          | jenkins | v1.32.0 | 05 Dec 23 19:38 UTC | 05 Dec 23 19:38 UTC |
	|         | -p addons-753790                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-753790 ssh cat                                                                       | addons-753790          | jenkins | v1.32.0 | 05 Dec 23 19:38 UTC | 05 Dec 23 19:38 UTC |
	|         | /opt/local-path-provisioner/pvc-3d274b4a-eada-4209-8083-82421c6fefec_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-753790 addons disable                                                                | addons-753790          | jenkins | v1.32.0 | 05 Dec 23 19:38 UTC | 05 Dec 23 19:39 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-753790 addons                                                                        | addons-753790          | jenkins | v1.32.0 | 05 Dec 23 19:39 UTC | 05 Dec 23 19:39 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-753790 addons                                                                        | addons-753790          | jenkins | v1.32.0 | 05 Dec 23 19:39 UTC | 05 Dec 23 19:39 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-753790          | jenkins | v1.32.0 | 05 Dec 23 19:39 UTC | 05 Dec 23 19:39 UTC |
	|         | -p addons-753790                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-753790          | jenkins | v1.32.0 | 05 Dec 23 19:39 UTC | 05 Dec 23 19:39 UTC |
	|         | addons-753790                                                                               |                        |         |         |                     |                     |
	| addons  | addons-753790 addons                                                                        | addons-753790          | jenkins | v1.32.0 | 05 Dec 23 19:39 UTC | 05 Dec 23 19:39 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-753790 ssh curl -s                                                                   | addons-753790          | jenkins | v1.32.0 | 05 Dec 23 19:39 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-753790 ip                                                                            | addons-753790          | jenkins | v1.32.0 | 05 Dec 23 19:42 UTC | 05 Dec 23 19:42 UTC |
	| addons  | addons-753790 addons disable                                                                | addons-753790          | jenkins | v1.32.0 | 05 Dec 23 19:42 UTC | 05 Dec 23 19:42 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-753790 addons disable                                                                | addons-753790          | jenkins | v1.32.0 | 05 Dec 23 19:42 UTC | 05 Dec 23 19:42 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
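	The ssh curl entry at 19:39 UTC above has no completion time recorded; it is the probe the Ingress test fails on. A minimal way to replay it by hand against the same profile, with --max-time added here (an addition, not part of the logged command) so curl fails fast instead of hanging:
	
	out/minikube-linux-arm64 -p addons-753790 ssh \
	  "curl -s --max-time 10 -o /dev/null -w '%{http_code}' http://127.0.0.1/ -H 'Host: nginx.example.com'"
	
	A healthy ingress path prints 200 here.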
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/05 19:35:51
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 19:35:51.909842    8344 out.go:296] Setting OutFile to fd 1 ...
	I1205 19:35:51.910009    8344 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:35:51.910046    8344 out.go:309] Setting ErrFile to fd 2...
	I1205 19:35:51.910067    8344 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:35:51.910315    8344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-2478/.minikube/bin
	I1205 19:35:51.910776    8344 out.go:303] Setting JSON to false
	I1205 19:35:51.911510    8344 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1098,"bootTime":1701803854,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 19:35:51.911604    8344 start.go:138] virtualization:  
	I1205 19:35:51.915582    8344 out.go:177] * [addons-753790] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1205 19:35:51.917528    8344 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 19:35:51.919482    8344 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:35:51.917639    8344 notify.go:220] Checking for updates...
	I1205 19:35:51.923449    8344 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-2478/kubeconfig
	I1205 19:35:51.925354    8344 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-2478/.minikube
	I1205 19:35:51.927410    8344 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 19:35:51.929186    8344 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 19:35:51.931705    8344 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 19:35:51.954604    8344 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1205 19:35:51.954720    8344 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:35:52.046437    8344 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-12-05 19:35:52.036331199 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1205 19:35:52.046533    8344 docker.go:295] overlay module found
	I1205 19:35:52.050156    8344 out.go:177] * Using the docker driver based on user configuration
	I1205 19:35:52.052041    8344 start.go:298] selected driver: docker
	I1205 19:35:52.052059    8344 start.go:902] validating driver "docker" against <nil>
	I1205 19:35:52.052072    8344 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 19:35:52.052661    8344 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:35:52.125509    8344 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-12-05 19:35:52.116544252 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1205 19:35:52.125666    8344 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1205 19:35:52.125895    8344 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 19:35:52.127850    8344 out.go:177] * Using Docker driver with root privileges
	I1205 19:35:52.130019    8344 cni.go:84] Creating CNI manager for ""
	I1205 19:35:52.130037    8344 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 19:35:52.130049    8344 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 19:35:52.130063    8344 start_flags.go:323] config:
	{Name:addons-753790 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-753790 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:35:52.132303    8344 out.go:177] * Starting control plane node addons-753790 in cluster addons-753790
	I1205 19:35:52.134016    8344 cache.go:121] Beginning downloading kic base image for docker with crio
	I1205 19:35:52.135942    8344 out.go:177] * Pulling base image ...
	I1205 19:35:52.137764    8344 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local docker daemon
	I1205 19:35:52.137921    8344 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 19:35:52.137949    8344 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17731-2478/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I1205 19:35:52.137960    8344 cache.go:56] Caching tarball of preloaded images
	I1205 19:35:52.138020    8344 preload.go:174] Found /home/jenkins/minikube-integration/17731-2478/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1205 19:35:52.138036    8344 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1205 19:35:52.138365    8344 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/config.json ...
	I1205 19:35:52.138394    8344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/config.json: {Name:mkcffab7f9f6129a33892e5ab8934455fae325aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:52.154734    8344 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f to local cache
	I1205 19:35:52.154856    8344 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local cache directory
	I1205 19:35:52.154877    8344 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local cache directory, skipping pull
	I1205 19:35:52.154883    8344 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f exists in cache, skipping pull
	I1205 19:35:52.154893    8344 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f as a tarball
	I1205 19:35:52.154901    8344 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f from local cache
	I1205 19:36:07.569679    8344 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f from cached tarball
	I1205 19:36:07.569718    8344 cache.go:194] Successfully downloaded all kic artifacts
	I1205 19:36:07.569780    8344 start.go:365] acquiring machines lock for addons-753790: {Name:mk0a3aaca0e4c76f2f889d779e8013d626af074e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:36:07.569887    8344 start.go:369] acquired machines lock for "addons-753790" in 83.668µs
	I1205 19:36:07.569917    8344 start.go:93] Provisioning new machine with config: &{Name:addons-753790 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-753790 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:36:07.569999    8344 start.go:125] createHost starting for "" (driver="docker")
	I1205 19:36:07.572546    8344 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1205 19:36:07.572775    8344 start.go:159] libmachine.API.Create for "addons-753790" (driver="docker")
	I1205 19:36:07.572822    8344 client.go:168] LocalClient.Create starting
	I1205 19:36:07.572915    8344 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem
	I1205 19:36:08.280514    8344 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/cert.pem
	I1205 19:36:08.423701    8344 cli_runner.go:164] Run: docker network inspect addons-753790 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 19:36:08.441711    8344 cli_runner.go:211] docker network inspect addons-753790 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 19:36:08.441785    8344 network_create.go:281] running [docker network inspect addons-753790] to gather additional debugging logs...
	I1205 19:36:08.441804    8344 cli_runner.go:164] Run: docker network inspect addons-753790
	W1205 19:36:08.458312    8344 cli_runner.go:211] docker network inspect addons-753790 returned with exit code 1
	I1205 19:36:08.458347    8344 network_create.go:284] error running [docker network inspect addons-753790]: docker network inspect addons-753790: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-753790 not found
	I1205 19:36:08.458360    8344 network_create.go:286] output of [docker network inspect addons-753790]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-753790 not found
	
	** /stderr **
	I1205 19:36:08.458472    8344 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 19:36:08.475034    8344 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40025835a0}
	I1205 19:36:08.475069    8344 network_create.go:124] attempt to create docker network addons-753790 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1205 19:36:08.475125    8344 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-753790 addons-753790
	I1205 19:36:08.537915    8344 network_create.go:108] docker network addons-753790 192.168.49.0/24 created
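	The network-create step just logged can be reproduced and verified by hand; the create flags below are copied from the Run line above, while the inspect format string is standard docker CLI usage added for illustration:
	
	docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-753790 addons-753790
	docker network inspect addons-753790 --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'
	
	Expected output: 192.168.49.0/24 via 192.168.49.1, matching the subnet minikube selected above.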
	I1205 19:36:08.537947    8344 kic.go:121] calculated static IP "192.168.49.2" for the "addons-753790" container
	I1205 19:36:08.538027    8344 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 19:36:08.555159    8344 cli_runner.go:164] Run: docker volume create addons-753790 --label name.minikube.sigs.k8s.io=addons-753790 --label created_by.minikube.sigs.k8s.io=true
	I1205 19:36:08.574930    8344 oci.go:103] Successfully created a docker volume addons-753790
	I1205 19:36:08.575012    8344 cli_runner.go:164] Run: docker run --rm --name addons-753790-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-753790 --entrypoint /usr/bin/test -v addons-753790:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -d /var/lib
	I1205 19:36:10.808501    8344 cli_runner.go:217] Completed: docker run --rm --name addons-753790-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-753790 --entrypoint /usr/bin/test -v addons-753790:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -d /var/lib: (2.23344409s)
	I1205 19:36:10.808530    8344 oci.go:107] Successfully prepared a docker volume addons-753790
	I1205 19:36:10.808561    8344 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 19:36:10.808583    8344 kic.go:194] Starting extracting preloaded images to volume ...
	I1205 19:36:10.808664    8344 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17731-2478/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-753790:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -I lz4 -xf /preloaded.tar -C /extractDir
	I1205 19:36:15.043741    8344 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17731-2478/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-753790:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -I lz4 -xf /preloaded.tar -C /extractDir: (4.235039179s)
	I1205 19:36:15.043785    8344 kic.go:203] duration metric: took 4.235201 seconds to extract preloaded images to volume
	W1205 19:36:15.043941    8344 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1205 19:36:15.044126    8344 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 19:36:15.140282    8344 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-753790 --name addons-753790 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-753790 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-753790 --network addons-753790 --ip 192.168.49.2 --volume addons-753790:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f
	I1205 19:36:15.539910    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Running}}
	I1205 19:36:15.562980    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Status}}
	I1205 19:36:15.591613    8344 cli_runner.go:164] Run: docker exec addons-753790 stat /var/lib/dpkg/alternatives/iptables
	I1205 19:36:15.662765    8344 oci.go:144] the created container "addons-753790" has a running status.
	I1205 19:36:15.662793    8344 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa...
	I1205 19:36:16.048771    8344 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 19:36:16.085504    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Status}}
	I1205 19:36:16.115898    8344 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 19:36:16.115918    8344 kic_runner.go:114] Args: [docker exec --privileged addons-753790 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 19:36:16.199036    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Status}}
	I1205 19:36:16.234930    8344 machine.go:88] provisioning docker machine ...
	I1205 19:36:16.234969    8344 ubuntu.go:169] provisioning hostname "addons-753790"
	I1205 19:36:16.235028    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:16.268003    8344 main.go:141] libmachine: Using SSH client type: native
	I1205 19:36:16.268417    8344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1205 19:36:16.268436    8344 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-753790 && echo "addons-753790" | sudo tee /etc/hostname
	I1205 19:36:16.271388    8344 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51844->127.0.0.1:32772: read: connection reset by peer
	I1205 19:36:19.437721    8344 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-753790
	
	I1205 19:36:19.437803    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:19.456804    8344 main.go:141] libmachine: Using SSH client type: native
	I1205 19:36:19.457216    8344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1205 19:36:19.457239    8344 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-753790' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-753790/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-753790' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 19:36:19.604611    8344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
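	The /etc/hosts snippet just executed reads cleanly as a standalone idempotent script; restated below with the profile name factored into a variable (the HOSTNAME variable is an illustration, minikube inlines the literal name):
	
	HOSTNAME=addons-753790
	# Only touch /etc/hosts if no line already ends in the hostname.
	if ! grep -xq ".*\s${HOSTNAME}" /etc/hosts; then
	  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	    # Replace the existing 127.0.1.1 mapping in place.
	    sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 ${HOSTNAME}/g" /etc/hosts
	  else
	    # No 127.0.1.1 line yet: append one.
	    echo "127.0.1.1 ${HOSTNAME}" | sudo tee -a /etc/hosts
	  fi
	fi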
	I1205 19:36:19.604635    8344 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17731-2478/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-2478/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-2478/.minikube}
	I1205 19:36:19.604664    8344 ubuntu.go:177] setting up certificates
	I1205 19:36:19.604673    8344 provision.go:83] configureAuth start
	I1205 19:36:19.604737    8344 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-753790
	I1205 19:36:19.623116    8344 provision.go:138] copyHostCerts
	I1205 19:36:19.623198    8344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-2478/.minikube/cert.pem (1123 bytes)
	I1205 19:36:19.623311    8344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-2478/.minikube/key.pem (1679 bytes)
	I1205 19:36:19.623388    8344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-2478/.minikube/ca.pem (1078 bytes)
	I1205 19:36:19.623441    8344 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-2478/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca-key.pem org=jenkins.addons-753790 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-753790]
	I1205 19:36:20.535186    8344 provision.go:172] copyRemoteCerts
	I1205 19:36:20.535274    8344 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 19:36:20.535318    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:20.553184    8344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa Username:docker}
	I1205 19:36:20.657812    8344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 19:36:20.684920    8344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1205 19:36:20.713418    8344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 19:36:20.740255    8344 provision.go:86] duration metric: configureAuth took 1.1355691s
	I1205 19:36:20.740279    8344 ubuntu.go:193] setting minikube options for container-runtime
	I1205 19:36:20.740470    8344 config.go:182] Loaded profile config "addons-753790": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 19:36:20.740577    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:20.758917    8344 main.go:141] libmachine: Using SSH client type: native
	I1205 19:36:20.759320    8344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1205 19:36:20.759341    8344 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 19:36:21.029951    8344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 19:36:21.029976    8344 machine.go:91] provisioned docker machine in 4.795023381s
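	The %!s(MISSING) in the printf command a few lines up is an artifact of minikube's own log formatting, not what ran on the node; the verb was almost certainly %s, so the executed command most likely reads as follows (a reconstruction, consistent with the CRIO_MINIKUBE_OPTIONS line echoed back in the SSH output above):
	
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio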
	I1205 19:36:21.029986    8344 client.go:171] LocalClient.Create took 13.457153178s
	I1205 19:36:21.029999    8344 start.go:167] duration metric: libmachine.API.Create for "addons-753790" took 13.457223356s
	I1205 19:36:21.030007    8344 start.go:300] post-start starting for "addons-753790" (driver="docker")
	I1205 19:36:21.030016    8344 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 19:36:21.030080    8344 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 19:36:21.030124    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:21.053331    8344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa Username:docker}
	I1205 19:36:21.158289    8344 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 19:36:21.162257    8344 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 19:36:21.162292    8344 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1205 19:36:21.162303    8344 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1205 19:36:21.162316    8344 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1205 19:36:21.162326    8344 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-2478/.minikube/addons for local assets ...
	I1205 19:36:21.162395    8344 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-2478/.minikube/files for local assets ...
	I1205 19:36:21.162422    8344 start.go:303] post-start completed in 132.409818ms
	I1205 19:36:21.162718    8344 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-753790
	I1205 19:36:21.179829    8344 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/config.json ...
	I1205 19:36:21.180095    8344 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 19:36:21.180158    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:21.200386    8344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa Username:docker}
	I1205 19:36:21.301542    8344 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 19:36:21.306886    8344 start.go:128] duration metric: createHost completed in 13.736873202s
	I1205 19:36:21.306907    8344 start.go:83] releasing machines lock for "addons-753790", held for 13.737006948s
	I1205 19:36:21.306981    8344 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-753790
	I1205 19:36:21.324082    8344 ssh_runner.go:195] Run: cat /version.json
	I1205 19:36:21.324134    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:21.324201    8344 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 19:36:21.324266    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:21.342929    8344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa Username:docker}
	I1205 19:36:21.360561    8344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa Username:docker}
	I1205 19:36:21.575511    8344 ssh_runner.go:195] Run: systemctl --version
	I1205 19:36:21.580777    8344 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 19:36:21.725908    8344 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 19:36:21.731276    8344 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:36:21.753568    8344 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1205 19:36:21.753646    8344 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:36:21.785870    8344 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1205 19:36:21.785897    8344 start.go:475] detecting cgroup driver to use...
	I1205 19:36:21.785928    8344 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1205 19:36:21.785978    8344 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 19:36:21.803586    8344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 19:36:21.816451    8344 docker.go:203] disabling cri-docker service (if available) ...
	I1205 19:36:21.816551    8344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 19:36:21.832124    8344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 19:36:21.848559    8344 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 19:36:21.945385    8344 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 19:36:22.041922    8344 docker.go:219] disabling docker service ...
	I1205 19:36:22.042028    8344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 19:36:22.062253    8344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 19:36:22.075461    8344 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 19:36:22.165581    8344 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 19:36:22.266853    8344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 19:36:22.279179    8344 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 19:36:22.297631    8344 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1205 19:36:22.297696    8344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:36:22.308732    8344 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 19:36:22.308793    8344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:36:22.319615    8344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:36:22.330573    8344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:36:22.341251    8344 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 19:36:22.351413    8344 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 19:36:22.360976    8344 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 19:36:22.370263    8344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:36:22.454601    8344 ssh_runner.go:195] Run: sudo systemctl restart crio
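	Condensed, the CRI-O adjustments in the block above amount to the following script; the sed expressions are the ones logged, with only the CONF variable and comments added:
	
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	# Pin the pause image to the one kubeadm expects.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
	# Match the host's cgroupfs driver and run conmon in the pod cgroup.
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	# Clear stale CNI config, enable IPv4 forwarding, restart the runtime.
	sudo rm -rf /etc/cni/net.mk
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio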
	I1205 19:36:22.566535    8344 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 19:36:22.566610    8344 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 19:36:22.571088    8344 start.go:543] Will wait 60s for crictl version
	I1205 19:36:22.571144    8344 ssh_runner.go:195] Run: which crictl
	I1205 19:36:22.575134    8344 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 19:36:22.613738    8344 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1205 19:36:22.613865    8344 ssh_runner.go:195] Run: crio --version
	I1205 19:36:22.659470    8344 ssh_runner.go:195] Run: crio --version
	I1205 19:36:22.709825    8344 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1205 19:36:22.712147    8344 cli_runner.go:164] Run: docker network inspect addons-753790 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 19:36:22.729047    8344 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1205 19:36:22.733374    8344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:36:22.746060    8344 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 19:36:22.746127    8344 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:36:22.811655    8344 crio.go:496] all images are preloaded for cri-o runtime.
	I1205 19:36:22.811676    8344 crio.go:415] Images already preloaded, skipping extraction
	I1205 19:36:22.811730    8344 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:36:22.851094    8344 crio.go:496] all images are preloaded for cri-o runtime.
	I1205 19:36:22.851117    8344 cache_images.go:84] Images are preloaded, skipping loading
	I1205 19:36:22.851188    8344 ssh_runner.go:195] Run: crio config
	I1205 19:36:22.919605    8344 cni.go:84] Creating CNI manager for ""
	I1205 19:36:22.919625    8344 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 19:36:22.919670    8344 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 19:36:22.919696    8344 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-753790 NodeName:addons-753790 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 19:36:22.919881    8344 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-753790"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
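	This rendered kubeadm config is copied to the node a few lines below as /var/tmp/minikube/kubeadm.yaml.new. To inspect what is actually on a live profile, one option is standard minikube ssh usage (an illustration; the final path may drop the .new suffix once the config is adopted):
	
	out/minikube-linux-arm64 -p addons-753790 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml"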
	
	I1205 19:36:22.919960    8344 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-753790 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-753790 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
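	The generated kubelet unit above is written out just below as /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in; systemd can show the merged result (standard systemctl usage added for illustration, not from this log):
	
	out/minikube-linux-arm64 -p addons-753790 ssh "sudo systemctl cat kubelet"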
	I1205 19:36:22.920043    8344 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1205 19:36:22.930121    8344 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 19:36:22.930225    8344 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 19:36:22.940016    8344 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1205 19:36:22.959618    8344 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 19:36:22.979784    8344 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I1205 19:36:22.999258    8344 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 19:36:23.003544    8344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:36:23.016180    8344 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790 for IP: 192.168.49.2
	I1205 19:36:23.016209    8344 certs.go:190] acquiring lock for shared ca certs: {Name:mk8ef93a51958e82275f202c3866b092b6aa4ced Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:36:23.016349    8344 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17731-2478/.minikube/ca.key
	I1205 19:36:23.389384    8344 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-2478/.minikube/ca.crt ...
	I1205 19:36:23.389410    8344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/.minikube/ca.crt: {Name:mk6803fcf95b12ed9d9ed71b2ebfb52226bf7c74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:36:23.389609    8344 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-2478/.minikube/ca.key ...
	I1205 19:36:23.389623    8344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/.minikube/ca.key: {Name:mkf92cda3b17c7b2bc3ea5041c219bff8618a437 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:36:23.389708    8344 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17731-2478/.minikube/proxy-client-ca.key
	I1205 19:36:24.172249    8344 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-2478/.minikube/proxy-client-ca.crt ...
	I1205 19:36:24.172277    8344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/.minikube/proxy-client-ca.crt: {Name:mkf1ad06a6ca45c538781f7e4d8156ae9ea85689 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:36:24.172453    8344 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-2478/.minikube/proxy-client-ca.key ...
	I1205 19:36:24.172466    8344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/.minikube/proxy-client-ca.key: {Name:mk976d49e1c41f0b574101fa3b655a03410a7360 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:36:24.172578    8344 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.key
	I1205 19:36:24.172594    8344 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.crt with IP's: []
	I1205 19:36:24.292230    8344 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.crt ...
	I1205 19:36:24.292256    8344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.crt: {Name:mkd9b024028d488e95b01d4658c8d526a9df083f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:36:24.292434    8344 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.key ...
	I1205 19:36:24.292449    8344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.key: {Name:mk131c0bce6aa9cc9a0c7550e2f58984bfefb7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:36:24.292530    8344 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/apiserver.key.dd3b5fb2
	I1205 19:36:24.292551    8344 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1205 19:36:24.453793    8344 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/apiserver.crt.dd3b5fb2 ...
	I1205 19:36:24.453819    8344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/apiserver.crt.dd3b5fb2: {Name:mkd96a5ba477f7ac61b1220d340ee67fbb940da6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:36:24.453987    8344 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/apiserver.key.dd3b5fb2 ...
	I1205 19:36:24.454001    8344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/apiserver.key.dd3b5fb2: {Name:mk711b2e84591f91a1f001e8b533ea6bab25c4c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:36:24.454080    8344 certs.go:337] copying /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/apiserver.crt
	I1205 19:36:24.454152    8344 certs.go:341] copying /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/apiserver.key
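
The apiserver certificate written above was generated for the IPs listed at the "Generating cert" step (192.168.49.2, 10.96.0.1, 127.0.0.1, 10.0.0.1). A sketch of how to read the SANs back out of the written file, with the path taken from the log:

	# Show the Subject Alternative Name extension of the freshly written cert.
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'
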
	I1205 19:36:24.454203    8344 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/proxy-client.key
	I1205 19:36:24.454221    8344 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/proxy-client.crt with IP's: []
	I1205 19:36:24.902717    8344 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/proxy-client.crt ...
	I1205 19:36:24.902747    8344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/proxy-client.crt: {Name:mkcba8d9fa774f098c79875bed9c742ae22282fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:36:24.902919    8344 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/proxy-client.key ...
	I1205 19:36:24.902931    8344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/proxy-client.key: {Name:mk0d46cbd2a8515c1022cefd060b5673f2a88244 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:36:24.903113    8344 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 19:36:24.903153    8344 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem (1078 bytes)
	I1205 19:36:24.903182    8344 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/home/jenkins/minikube-integration/17731-2478/.minikube/certs/cert.pem (1123 bytes)
	I1205 19:36:24.903212    8344 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/home/jenkins/minikube-integration/17731-2478/.minikube/certs/key.pem (1679 bytes)
	I1205 19:36:24.903850    8344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1205 19:36:24.930857    8344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 19:36:24.958111    8344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 19:36:24.984790    8344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 19:36:25.012074    8344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 19:36:25.040546    8344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 19:36:25.068236    8344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 19:36:25.096819    8344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 19:36:25.123570    8344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 19:36:25.150914    8344 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 19:36:25.170992    8344 ssh_runner.go:195] Run: openssl version
	I1205 19:36:25.177662    8344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 19:36:25.188661    8344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:36:25.193078    8344 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:36:25.193170    8344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:36:25.201115    8344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
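
The b5213941.0 symlink created above follows OpenSSL's hashed-directory convention: trust lookups in /etc/ssl/certs resolve by subject-name hash, which is exactly what the preceding 'openssl x509 -hash' call computed. The value can be reproduced by hand:

	# Prints b5213941; OpenSSL then finds the CA via /etc/ssl/certs/b5213941.0.
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
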
	I1205 19:36:25.211965    8344 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 19:36:25.216072    8344 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1205 19:36:25.216156    8344 kubeadm.go:404] StartCluster: {Name:addons-753790 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-753790 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:36:25.216246    8344 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 19:36:25.216337    8344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 19:36:25.256960    8344 cri.go:89] found id: ""
	I1205 19:36:25.257062    8344 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 19:36:25.267392    8344 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 19:36:25.277574    8344 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1205 19:36:25.277662    8344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 19:36:25.287637    8344 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 19:36:25.287713    8344 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 19:36:25.337992    8344 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1205 19:36:25.338274    8344 kubeadm.go:322] [preflight] Running pre-flight checks
	I1205 19:36:25.390401    8344 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1205 19:36:25.390472    8344 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1050-aws
	I1205 19:36:25.390511    8344 kubeadm.go:322] OS: Linux
	I1205 19:36:25.390569    8344 kubeadm.go:322] CGROUPS_CPU: enabled
	I1205 19:36:25.390628    8344 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1205 19:36:25.390684    8344 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1205 19:36:25.390735    8344 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1205 19:36:25.390786    8344 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1205 19:36:25.390845    8344 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1205 19:36:25.390892    8344 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1205 19:36:25.390945    8344 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1205 19:36:25.390992    8344 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1205 19:36:25.469216    8344 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 19:36:25.469323    8344 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 19:36:25.469415    8344 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 19:36:25.719941    8344 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 19:36:25.723291    8344 out.go:204]   - Generating certificates and keys ...
	I1205 19:36:25.723416    8344 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1205 19:36:25.723497    8344 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1205 19:36:26.279689    8344 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 19:36:26.396288    8344 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1205 19:36:27.615626    8344 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1205 19:36:27.934993    8344 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1205 19:36:28.112549    8344 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1205 19:36:28.112927    8344 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-753790 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1205 19:36:28.264453    8344 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1205 19:36:28.264832    8344 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-753790 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1205 19:36:28.585575    8344 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 19:36:28.952969    8344 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 19:36:29.084120    8344 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1205 19:36:29.084478    8344 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 19:36:29.577633    8344 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 19:36:31.045477    8344 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 19:36:31.640178    8344 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 19:36:31.953198    8344 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 19:36:31.954279    8344 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 19:36:31.957560    8344 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 19:36:31.960020    8344 out.go:204]   - Booting up control plane ...
	I1205 19:36:31.960144    8344 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 19:36:31.960217    8344 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 19:36:31.961219    8344 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 19:36:31.970896    8344 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 19:36:31.971948    8344 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 19:36:31.972197    8344 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1205 19:36:32.059165    8344 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 19:36:39.061230    8344 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.002108 seconds
	I1205 19:36:39.061348    8344 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 19:36:39.092658    8344 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 19:36:39.618390    8344 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 19:36:39.618572    8344 kubeadm.go:322] [mark-control-plane] Marking the node addons-753790 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 19:36:40.130227    8344 kubeadm.go:322] [bootstrap-token] Using token: idz0tv.fy35j0upqrlrbzb1
	I1205 19:36:40.132194    8344 out.go:204]   - Configuring RBAC rules ...
	I1205 19:36:40.132311    8344 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 19:36:40.138543    8344 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 19:36:40.146147    8344 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 19:36:40.149501    8344 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 19:36:40.152720    8344 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 19:36:40.157007    8344 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 19:36:40.172376    8344 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 19:36:40.409341    8344 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1205 19:36:40.559086    8344 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1205 19:36:40.559103    8344 kubeadm.go:322] 
	I1205 19:36:40.559160    8344 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1205 19:36:40.559165    8344 kubeadm.go:322] 
	I1205 19:36:40.559236    8344 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1205 19:36:40.559242    8344 kubeadm.go:322] 
	I1205 19:36:40.559266    8344 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1205 19:36:40.559321    8344 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 19:36:40.559368    8344 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 19:36:40.559373    8344 kubeadm.go:322] 
	I1205 19:36:40.559423    8344 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1205 19:36:40.559428    8344 kubeadm.go:322] 
	I1205 19:36:40.559472    8344 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 19:36:40.559477    8344 kubeadm.go:322] 
	I1205 19:36:40.559525    8344 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1205 19:36:40.559596    8344 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 19:36:40.559667    8344 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 19:36:40.559673    8344 kubeadm.go:322] 
	I1205 19:36:40.559750    8344 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 19:36:40.559834    8344 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1205 19:36:40.559841    8344 kubeadm.go:322] 
	I1205 19:36:40.559920    8344 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token idz0tv.fy35j0upqrlrbzb1 \
	I1205 19:36:40.560016    8344 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6da2d77b39f3e1ef9cef384839cc68d840e02bf2206be4d2a37e26b3d0a71759 \
	I1205 19:36:40.560035    8344 kubeadm.go:322] 	--control-plane 
	I1205 19:36:40.560039    8344 kubeadm.go:322] 
	I1205 19:36:40.560118    8344 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1205 19:36:40.560123    8344 kubeadm.go:322] 
	I1205 19:36:40.560199    8344 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token idz0tv.fy35j0upqrlrbzb1 \
	I1205 19:36:40.560294    8344 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6da2d77b39f3e1ef9cef384839cc68d840e02bf2206be4d2a37e26b3d0a71759 
	I1205 19:36:40.563482    8344 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1050-aws\n", err: exit status 1
	I1205 19:36:40.563590    8344 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
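
The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 digest of the cluster CA's public key. Assuming the RSA CA key kubeadm generates by default and the cert path from this run, it can be recomputed on the node with the standard openssl pipeline:

	# Recompute the discovery hash from the CA certificate on the node.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
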
	I1205 19:36:40.563604    8344 cni.go:84] Creating CNI manager for ""
	I1205 19:36:40.563611    8344 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 19:36:40.567143    8344 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1205 19:36:40.569084    8344 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1205 19:36:40.585046    8344 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1205 19:36:40.585064    8344 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1205 19:36:40.640240    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
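
Since the docker driver is paired with the crio runtime, minikube applied its kindnet manifest above (/var/tmp/minikube/cni.yaml). A sketch for checking that the CNI pods came up, run from the test host; the app=kindnet label is an assumption based on the kindnet manifest, not taken from this log:

	# List the kindnet CNI pods (label assumed).
	kubectl --context addons-753790 -n kube-system get pods -l app=kindnet -o wide
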
	I1205 19:36:41.480024    8344 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 19:36:41.480156    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:41.480229    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728 minikube.k8s.io/name=addons-753790 minikube.k8s.io/updated_at=2023_12_05T19_36_41_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:41.659525    8344 ops.go:34] apiserver oom_adj: -16
	I1205 19:36:41.659606    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:41.754575    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:42.345980    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:42.845792    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:43.345697    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:43.845314    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:44.345894    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:44.845442    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:45.345555    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:45.846208    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:46.345742    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:46.845337    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:47.345379    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:47.845978    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:48.345279    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:48.845413    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:49.345811    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:49.845650    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:50.345864    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:50.845611    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:51.345825    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:51.846171    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:52.345984    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:52.845732    8344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:53.015971    8344 kubeadm.go:1088] duration metric: took 11.535858434s to wait for elevateKubeSystemPrivileges.
	I1205 19:36:53.015995    8344 kubeadm.go:406] StartCluster complete in 27.799841691s
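
The burst of 'kubectl get sa default' calls above is a poll loop: kubeadm returns before the controller-manager has created the namespace's default ServiceAccount, so minikube retries (11.5s here) until it exists. A rough shell equivalent, as a sketch using the host kubeconfig instead of the in-node one:

	# Poll until the default ServiceAccount exists, then proceed.
	until kubectl --context addons-753790 get serviceaccount default >/dev/null 2>&1; do
	  sleep 0.5
	done
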
	I1205 19:36:53.016011    8344 settings.go:142] acquiring lock: {Name:mk9158e056caaf62837361622cedbf37e18c3f8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:36:53.016119    8344 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17731-2478/kubeconfig
	I1205 19:36:53.016494    8344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/kubeconfig: {Name:mka2e3e3347ae085678ba2bb20225628c9c86ab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:36:53.016766    8344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 19:36:53.016791    8344 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
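
The toEnable map above is the effective addon selection for this profile. The same view is available from the CLI, as a sketch (the plain 'minikube' binary name is an assumption; this job drives a locally built binary):

	# Show enabled/disabled addons for the addons-753790 profile.
	minikube -p addons-753790 addons list
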
	I1205 19:36:53.016882    8344 addons.go:69] Setting volumesnapshots=true in profile "addons-753790"
	I1205 19:36:53.016900    8344 addons.go:231] Setting addon volumesnapshots=true in "addons-753790"
	I1205 19:36:53.016956    8344 host.go:66] Checking if "addons-753790" exists ...
	I1205 19:36:53.017025    8344 config.go:182] Loaded profile config "addons-753790": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 19:36:53.017069    8344 addons.go:69] Setting ingress-dns=true in profile "addons-753790"
	I1205 19:36:53.017080    8344 addons.go:231] Setting addon ingress-dns=true in "addons-753790"
	I1205 19:36:53.017126    8344 host.go:66] Checking if "addons-753790" exists ...
	I1205 19:36:53.017411    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Status}}
	I1205 19:36:53.017503    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Status}}
	I1205 19:36:53.017905    8344 addons.go:69] Setting inspektor-gadget=true in profile "addons-753790"
	I1205 19:36:53.017926    8344 addons.go:231] Setting addon inspektor-gadget=true in "addons-753790"
	I1205 19:36:53.017964    8344 host.go:66] Checking if "addons-753790" exists ...
	I1205 19:36:53.018357    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Status}}
	I1205 19:36:53.018462    8344 addons.go:69] Setting cloud-spanner=true in profile "addons-753790"
	I1205 19:36:53.018474    8344 addons.go:231] Setting addon cloud-spanner=true in "addons-753790"
	I1205 19:36:53.018508    8344 host.go:66] Checking if "addons-753790" exists ...
	I1205 19:36:53.018892    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Status}}
	I1205 19:36:53.021226    8344 addons.go:69] Setting metrics-server=true in profile "addons-753790"
	I1205 19:36:53.021252    8344 addons.go:231] Setting addon metrics-server=true in "addons-753790"
	I1205 19:36:53.021293    8344 host.go:66] Checking if "addons-753790" exists ...
	I1205 19:36:53.021703    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Status}}
	I1205 19:36:53.023633    8344 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-753790"
	I1205 19:36:53.023684    8344 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-753790"
	I1205 19:36:53.023721    8344 host.go:66] Checking if "addons-753790" exists ...
	I1205 19:36:53.024155    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Status}}
	I1205 19:36:53.034230    8344 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-753790"
	I1205 19:36:53.034310    8344 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-753790"
	I1205 19:36:53.034397    8344 host.go:66] Checking if "addons-753790" exists ...
	I1205 19:36:53.034926    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Status}}
	I1205 19:36:53.051724    8344 addons.go:69] Setting registry=true in profile "addons-753790"
	I1205 19:36:53.051832    8344 addons.go:231] Setting addon registry=true in "addons-753790"
	I1205 19:36:53.051909    8344 host.go:66] Checking if "addons-753790" exists ...
	I1205 19:36:53.051930    8344 addons.go:69] Setting gcp-auth=true in profile "addons-753790"
	I1205 19:36:53.051958    8344 mustload.go:65] Loading cluster: addons-753790
	I1205 19:36:53.052153    8344 config.go:182] Loaded profile config "addons-753790": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 19:36:53.052387    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Status}}
	I1205 19:36:53.052499    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Status}}
	I1205 19:36:53.066942    8344 addons.go:69] Setting ingress=true in profile "addons-753790"
	I1205 19:36:53.066977    8344 addons.go:231] Setting addon ingress=true in "addons-753790"
	I1205 19:36:53.067040    8344 host.go:66] Checking if "addons-753790" exists ...
	I1205 19:36:53.067530    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Status}}
	I1205 19:36:53.051921    8344 addons.go:69] Setting default-storageclass=true in profile "addons-753790"
	I1205 19:36:53.071013    8344 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-753790"
	I1205 19:36:53.198610    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Status}}
	I1205 19:36:53.071052    8344 addons.go:69] Setting storage-provisioner=true in profile "addons-753790"
	I1205 19:36:53.238605    8344 addons.go:231] Setting addon storage-provisioner=true in "addons-753790"
	I1205 19:36:53.238701    8344 host.go:66] Checking if "addons-753790" exists ...
	I1205 19:36:53.239169    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Status}}
	I1205 19:36:53.258520    8344 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1205 19:36:53.286505    8344 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I1205 19:36:53.071064    8344 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-753790"
	I1205 19:36:53.277327    8344 host.go:66] Checking if "addons-753790" exists ...
	I1205 19:36:53.288143    8344 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1205 19:36:53.288176    8344 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-753790"
	I1205 19:36:53.290082    8344 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1205 19:36:53.290089    8344 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1205 19:36:53.299007    8344 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1205 19:36:53.299872    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1205 19:36:53.299928    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:53.300376    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Status}}
	I1205 19:36:53.311837    8344 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1205 19:36:53.311862    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1205 19:36:53.311911    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:53.314062    8344 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 19:36:53.314077    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 19:36:53.314131    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:53.324855    8344 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-753790" context rescaled to 1 replica
	I1205 19:36:53.324898    8344 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:36:53.327090    8344 out.go:177] * Verifying Kubernetes components...
	I1205 19:36:53.299839    8344 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1205 19:36:53.299791    8344 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1205 19:36:53.299799    8344 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1205 19:36:53.331980    8344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:36:53.331987    8344 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1205 19:36:53.331994    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1205 19:36:53.333104    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:53.336369    8344 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1205 19:36:53.341218    8344 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1205 19:36:53.341278    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1205 19:36:53.349446    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:53.351573    8344 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1205 19:36:53.349753    8344 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 19:36:53.370624    8344 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1205 19:36:53.353882    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1205 19:36:53.365834    8344 addons.go:231] Setting addon default-storageclass=true in "addons-753790"
	I1205 19:36:53.381945    8344 host.go:66] Checking if "addons-753790" exists ...
	I1205 19:36:53.382452    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Status}}
	I1205 19:36:53.387024    8344 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1205 19:36:53.392558    8344 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1205 19:36:53.399848    8344 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1205 19:36:53.401925    8344 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1205 19:36:53.401190    8344 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1205 19:36:53.401259    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:53.406414    8344 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1205 19:36:53.404178    8344 out.go:177]   - Using image docker.io/registry:2.8.3
	I1205 19:36:53.410629    8344 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1205 19:36:53.408557    8344 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1205 19:36:53.408817    8344 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1205 19:36:53.412685    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1205 19:36:53.412783    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:53.415188    8344 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1205 19:36:53.415206    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1205 19:36:53.415296    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:53.431591    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1205 19:36:53.431662    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:53.454382    8344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 19:36:53.455384    8344 node_ready.go:35] waiting up to 6m0s for node "addons-753790" to be "Ready" ...
	I1205 19:36:53.471641    8344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa Username:docker}
	I1205 19:36:53.486840    8344 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-753790"
	I1205 19:36:53.486881    8344 host.go:66] Checking if "addons-753790" exists ...
	I1205 19:36:53.487323    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Status}}
	I1205 19:36:53.538274    8344 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 19:36:53.533995    8344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa Username:docker}
	I1205 19:36:53.544237    8344 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 19:36:53.544263    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 19:36:53.544327    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:53.562793    8344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa Username:docker}
	I1205 19:36:53.577784    8344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa Username:docker}
	I1205 19:36:53.593529    8344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa Username:docker}
	I1205 19:36:53.634984    8344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa Username:docker}
	I1205 19:36:53.638075    8344 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 19:36:53.638096    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 19:36:53.638153    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:53.643657    8344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa Username:docker}
	I1205 19:36:53.705147    8344 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1205 19:36:53.700262    8344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa Username:docker}
	I1205 19:36:53.702618    8344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa Username:docker}
	I1205 19:36:53.711826    8344 out.go:177]   - Using image docker.io/busybox:stable
	I1205 19:36:53.718432    8344 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1205 19:36:53.718452    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1205 19:36:53.718515    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:36:53.718927    8344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa Username:docker}
	I1205 19:36:53.729152    8344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa Username:docker}
	I1205 19:36:53.751994    8344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa Username:docker}
	I1205 19:36:53.974061    8344 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1205 19:36:53.974084    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1205 19:36:54.010474    8344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1205 19:36:54.010978    8344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1205 19:36:54.039576    8344 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1205 19:36:54.039601    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1205 19:36:54.130509    8344 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 19:36:54.130555    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1205 19:36:54.136106    8344 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1205 19:36:54.136127    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1205 19:36:54.139680    8344 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1205 19:36:54.139699    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1205 19:36:54.143183    8344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 19:36:54.148826    8344 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1205 19:36:54.148847    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1205 19:36:54.154199    8344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 19:36:54.227614    8344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 19:36:54.228402    8344 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1205 19:36:54.228420    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1205 19:36:54.234902    8344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1205 19:36:54.238716    8344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1205 19:36:54.305305    8344 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1205 19:36:54.305334    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1205 19:36:54.310173    8344 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 19:36:54.310191    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 19:36:54.316836    8344 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1205 19:36:54.316857    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1205 19:36:54.355596    8344 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1205 19:36:54.355666    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1205 19:36:54.406360    8344 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1205 19:36:54.406425    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1205 19:36:54.446485    8344 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1205 19:36:54.446554    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1205 19:36:54.469303    8344 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 19:36:54.469373    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 19:36:54.495002    8344 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1205 19:36:54.495070    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1205 19:36:54.534300    8344 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1205 19:36:54.534369    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1205 19:36:54.603000    8344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1205 19:36:54.613746    8344 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1205 19:36:54.613811    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1205 19:36:54.657897    8344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 19:36:54.703387    8344 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1205 19:36:54.703455    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1205 19:36:54.705863    8344 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1205 19:36:54.705910    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1205 19:36:54.794003    8344 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 19:36:54.794064    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1205 19:36:54.850759    8344 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1205 19:36:54.850819    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1205 19:36:54.901870    8344 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1205 19:36:54.901937    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1205 19:36:54.915538    8344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 19:36:54.960643    8344 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1205 19:36:54.960710    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1205 19:36:55.053432    8344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1205 19:36:55.084367    8344 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1205 19:36:55.084439    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1205 19:36:55.126777    8344 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.672362614s)
	I1205 19:36:55.126866    8344 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
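The two lines above are minikube's CoreDNS customization: it pipes the coredns ConfigMap through sed to splice a `hosts` block (mapping host.minikube.internal to the gateway 192.168.49.1) in front of the stock forward plugin, then replaces the ConfigMap. A minimal client-go sketch of the same edit, for illustration only (the kubeconfig handling is generic, and the indentation-sensitive match mirrors the sed pattern above, so it may need adjusting for other Corefiles):

	package main

	import (
		"context"
		"fmt"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a clientset from the default kubeconfig (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		ctx := context.Background()
		cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}

		// Splice a hosts block in front of the forward plugin, mirroring the sed above.
		// The leading spaces must match the Corefile's own indentation.
		hosts := "        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }\n"
		corefile := cm.Data["Corefile"]
		if !strings.Contains(corefile, "host.minikube.internal") {
			cm.Data["Corefile"] = strings.Replace(corefile, "        forward .", hosts+"        forward .", 1)
			if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
				panic(err)
			}
		}
		fmt.Println("host record injected into CoreDNS's ConfigMap")
	}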
	I1205 19:36:55.227306    8344 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1205 19:36:55.227376    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1205 19:36:55.417582    8344 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1205 19:36:55.417649    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1205 19:36:55.582416    8344 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1205 19:36:55.582486    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
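Each "scp memory --> path (N bytes)" line streams a manifest that is embedded in the minikube binary directly onto the node over SSH, with no temporary file on the host side. A rough equivalent using golang.org/x/crypto/ssh, reusing the SSH endpoint that appears later in this log (127.0.0.1:32772, user docker, the machine's id_rsa); pushBytes and the demo manifest are illustrative, not minikube's actual helper:

	package main

	import (
		"bytes"
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// pushBytes writes data to remotePath on the node by piping it into `sudo tee`,
	// one simple way to reproduce the "scp memory" effect seen in the log.
	func pushBytes(client *ssh.Client, data []byte, remotePath string) error {
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		sess.Stdin = bytes.NewReader(data)
		return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", remotePath))
	}

	func main() {
		key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/addons-753790/id_rsa"))
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:32772", &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test node
		})
		if err != nil {
			panic(err)
		}
		defer client.Close()

		manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: demo\n")
		if err := pushBytes(client, manifest, "/etc/kubernetes/addons/demo.yaml"); err != nil {
			panic(err)
		}
	}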
	I1205 19:36:55.689677    8344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1205 19:36:55.812263    8344 node_ready.go:58] node "addons-753790" has status "Ready":"False"
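node_ready.go keeps polling the node object until its NodeReady condition flips to True, which here takes about 30 seconds (see the 19:37:24 transition further down). The predicate it re-evaluates amounts to the following helper, assuming a clientset built as in the ConfigMap sketch above:

	package main

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// nodeIsReady reports whether the named node's NodeReady condition is True;
	// this is the check behind each node_ready.go "Ready":"False" line.
	func nodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}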
	I1205 19:36:57.687970    8344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.67745176s)
	I1205 19:36:57.688027    8344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.677030226s)
	I1205 19:36:57.688060    8344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.544857618s)
	I1205 19:36:57.688220    8344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.534001062s)
	I1205 19:36:58.100077    8344 node_ready.go:58] node "addons-753790" has status "Ready":"False"
	I1205 19:36:58.127272    8344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.899529492s)
	I1205 19:36:58.192577    8344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.957641134s)
	I1205 19:36:58.834173    8344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.595408762s)
	I1205 19:36:58.834205    8344 addons.go:467] Verifying addon ingress=true in "addons-753790"
	I1205 19:36:58.834279    8344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.231213219s)
	I1205 19:36:58.834296    8344 addons.go:467] Verifying addon registry=true in "addons-753790"
	I1205 19:36:58.836914    8344 out.go:177] * Verifying ingress addon...
	I1205 19:36:58.834699    8344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.176720141s)
	I1205 19:36:58.834803    8344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.919200328s)
	I1205 19:36:58.834849    8344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.781348647s)
	I1205 19:36:58.838957    8344 out.go:177] * Verifying registry addon...
	I1205 19:36:58.841981    8344 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1205 19:36:58.839195    8344 addons.go:467] Verifying addon metrics-server=true in "addons-753790"
	W1205 19:36:58.839219    8344 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1205 19:36:58.842156    8344 retry.go:31] will retry after 262.635693ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
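This is the usual CRD-ordering race: the VolumeSnapshot CRDs and a VolumeSnapshotClass object go to the API server in a single kubectl invocation, and the new kinds are not yet registered when the custom resource is mapped, hence "no matches for kind ... ensure CRDs are installed first". The retry.go line schedules a re-apply after a short pause, and at 19:36:59.105 below the same file set is rerun with --force and succeeds. A generic sketch of that retry shape (attempt counts and delays are illustrative, not minikube's actual retry package):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// applyWithRetry reruns `kubectl apply` a few times with growing pauses so that
	// CRDs created by an earlier attempt have time to become established before the
	// custom resources that depend on them are mapped.
	func applyWithRetry(files []string, attempts int, base time.Duration) error {
		args := []string{"apply"}
		for _, f := range files {
			args = append(args, "-f", f)
		}
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("kubectl", args...).CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("kubectl apply: %v\n%s", err, out)
			time.Sleep(base * time.Duration(i+1)) // linear backoff; minikube's delays differ
		}
		return lastErr
	}

	func main() {
		err := applyWithRetry([]string{
			"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
			"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
		}, 3, 300*time.Millisecond)
		if err != nil {
			panic(err)
		}
	}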
	I1205 19:36:58.839974    8344 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1205 19:36:58.849967    8344 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1205 19:36:58.849993    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:58.854209    8344 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1205 19:36:58.854231    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:58.859196    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:58.862650    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:59.105294    8344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 19:36:59.135379    8344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.445605641s)
	I1205 19:36:59.135426    8344 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-753790"
	I1205 19:36:59.137629    8344 out.go:177] * Verifying csi-hostpath-driver addon...
	I1205 19:36:59.141242    8344 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1205 19:36:59.150907    8344 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1205 19:36:59.150928    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:59.160832    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:59.367272    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:59.385771    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:59.672627    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:59.863955    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:59.867451    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:00.109246    8344 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1205 19:37:00.109346    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:37:00.143350    8344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa Username:docker}
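The cli_runner inspect call resolves which host port Docker published for the container's 22/tcp so the SSH client can dial 127.0.0.1:32772. The same lookup, shelling out to the docker CLI with the exact template that appears in the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort returns the host port Docker published for the container's 22/tcp.
	func sshHostPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("addons-753790")
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh endpoint:", "127.0.0.1:"+port)
	}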
	I1205 19:37:00.165717    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:00.366015    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:00.378185    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:00.406114    8344 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1205 19:37:00.429133    8344 addons.go:231] Setting addon gcp-auth=true in "addons-753790"
	I1205 19:37:00.429188    8344 host.go:66] Checking if "addons-753790" exists ...
	I1205 19:37:00.429683    8344 cli_runner.go:164] Run: docker container inspect addons-753790 --format={{.State.Status}}
	I1205 19:37:00.450680    8344 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1205 19:37:00.450734    8344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-753790
	I1205 19:37:00.491916    8344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/addons-753790/id_rsa Username:docker}
	I1205 19:37:00.576724    8344 node_ready.go:58] node "addons-753790" has status "Ready":"False"
	I1205 19:37:00.674459    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:00.762291    8344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.656954891s)
	I1205 19:37:00.765913    8344 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1205 19:37:00.768092    8344 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1205 19:37:00.770074    8344 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1205 19:37:00.770096    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1205 19:37:00.846964    8344 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1205 19:37:00.846990    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1205 19:37:00.866449    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:00.870852    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:00.908822    8344 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 19:37:00.908844    8344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1205 19:37:00.962578    8344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 19:37:01.178048    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:01.382728    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:01.383613    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:01.666352    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:01.864124    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:01.873663    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:02.189547    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:02.264409    8344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.301791802s)
	I1205 19:37:02.267258    8344 addons.go:467] Verifying addon gcp-auth=true in "addons-753790"
	I1205 19:37:02.271081    8344 out.go:177] * Verifying gcp-auth addon...
	I1205 19:37:02.276105    8344 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1205 19:37:02.297906    8344 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1205 19:37:02.297930    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:02.306026    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
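Every kapi.go:96 line from here on is one iteration of a poll loop: list the pods matching the addon's label selector and try again until each reports phase Running (the "Pending: [<nil>]" tail means the pod is Pending with no reason recorded yet). Condensed with client-go's polling helper; the interval and the clientset wiring are assumed as in the earlier sketches, e.g. waitForPodsRunning(cs, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth", 6*time.Minute):

	package main

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodsRunning polls until every pod matching selector in ns is Running,
	// mirroring the shape of the kapi.go wait loops in this log.
	func waitForPodsRunning(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					return false, err
				}
				if len(pods.Items) == 0 {
					return false, nil // selector has not matched anything yet
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil // still Pending or otherwise not Running; poll again
					}
				}
				return true, nil
			})
	}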
	I1205 19:37:02.363885    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:02.367229    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:02.665814    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:02.810351    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:02.865131    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:02.872830    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:03.051178    8344 node_ready.go:58] node "addons-753790" has status "Ready":"False"
	I1205 19:37:03.168531    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:03.311377    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:03.364762    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:03.368706    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:03.667098    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:03.810664    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:03.863256    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:03.866790    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:04.165802    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:04.311362    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:04.364740    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:04.366435    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:04.665562    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:04.809472    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:04.863361    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:04.869107    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:05.051514    8344 node_ready.go:58] node "addons-753790" has status "Ready":"False"
	I1205 19:37:05.165251    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:05.309600    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:05.362996    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:05.366278    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:05.665972    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:05.809280    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:05.863679    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:05.866226    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:06.165236    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:06.309564    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:06.363048    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:06.367440    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:06.665572    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:06.809866    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:06.863245    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:06.866062    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:07.165176    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:07.309627    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:07.363406    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:07.366226    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:07.551042    8344 node_ready.go:58] node "addons-753790" has status "Ready":"False"
	I1205 19:37:07.665356    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:07.809782    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:07.863683    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:07.866894    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:08.165063    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:08.309370    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:08.363372    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:08.366762    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:08.665060    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:08.810218    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:08.863225    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:08.866158    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:09.165303    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:09.309697    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:09.363339    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:09.366447    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:09.551347    8344 node_ready.go:58] node "addons-753790" has status "Ready":"False"
	I1205 19:37:09.664929    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:09.810103    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:09.864035    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:09.867062    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:10.165171    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:10.309618    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:10.363077    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:10.366117    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:10.665246    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:10.809547    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:10.863615    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:10.867034    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:11.165237    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:11.309582    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:11.363430    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:11.366267    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:11.551627    8344 node_ready.go:58] node "addons-753790" has status "Ready":"False"
	I1205 19:37:11.665324    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:11.810111    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:11.864045    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:11.866036    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:12.166450    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:12.309366    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:12.365035    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:12.366919    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:12.665049    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:12.809890    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:12.863914    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:12.867002    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:13.165485    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:13.309278    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:13.363236    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:13.366468    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:13.551892    8344 node_ready.go:58] node "addons-753790" has status "Ready":"False"
	I1205 19:37:13.665247    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:13.809675    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:13.863618    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:13.866986    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:14.165401    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:14.309731    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:14.364093    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:14.366686    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:14.664881    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:14.809138    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:14.863720    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:14.865908    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:15.165933    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:15.309915    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:15.364086    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:15.366129    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:15.554425    8344 node_ready.go:58] node "addons-753790" has status "Ready":"False"
	I1205 19:37:15.665699    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:15.809454    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:15.864130    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:15.867071    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:16.165310    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:16.309994    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:16.363680    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:16.366522    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:16.665719    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:16.810082    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:16.863699    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:16.867100    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:17.165450    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:17.309255    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:17.364016    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:17.365861    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:17.665270    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:17.809872    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:17.866544    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:17.866948    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:18.054449    8344 node_ready.go:58] node "addons-753790" has status "Ready":"False"
	I1205 19:37:18.165879    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:18.309369    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:18.364262    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:18.366224    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:18.665532    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:18.809358    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:18.864200    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:18.866758    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:19.164811    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:19.309859    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:19.363198    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:19.366257    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:19.665775    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:19.815896    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:19.863175    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:19.866267    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:20.165905    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:20.309676    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:20.363825    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:20.366806    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:20.551910    8344 node_ready.go:58] node "addons-753790" has status "Ready":"False"
	I1205 19:37:20.665282    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:20.810076    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:20.864832    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:20.866556    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:21.164985    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:21.309757    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:21.363171    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:21.366273    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:21.665706    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:21.809831    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:21.863833    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:21.865955    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:22.165638    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:22.309706    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:22.363510    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:22.366838    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:22.664850    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:22.809783    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:22.863411    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:22.866558    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:23.051527    8344 node_ready.go:58] node "addons-753790" has status "Ready":"False"
	I1205 19:37:23.165721    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:23.309622    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:23.363775    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:23.365842    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:23.680285    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:23.812433    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:23.868045    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:23.868881    8344 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1205 19:37:23.868924    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:24.071095    8344 node_ready.go:49] node "addons-753790" has status "Ready":"True"
	I1205 19:37:24.071158    8344 node_ready.go:38] duration metric: took 30.615748861s waiting for node "addons-753790" to be "Ready" ...
	I1205 19:37:24.071182    8344 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 19:37:24.091604    8344 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rmhkn" in "kube-system" namespace to be "Ready" ...
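Note the shift in predicate here: node_ready and kapi watch for phase, while pod_ready.go waits on the pod's Ready condition, so a pod can be Running yet not Ready until its readiness probe passes (metrics-server below stays "Ready":"False" for a while for exactly that reason). The check reduces to a helper like this, sharing the imports of the earlier sketches:

	package main

	import corev1 "k8s.io/api/core/v1"

	// podIsReady reports whether the pod's Ready condition is True; Running pods
	// whose probes have not yet passed return false, matching the log's output.
	func podIsReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}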
	I1205 19:37:24.173378    8344 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1205 19:37:24.173443    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:24.310559    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:24.368571    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:24.371396    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:24.667349    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:24.812560    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:24.871032    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:24.871976    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:25.167256    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:25.310837    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:25.373707    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:25.374682    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:25.667096    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:25.816105    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:25.865297    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:25.872464    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:26.125615    8344 pod_ready.go:92] pod "coredns-5dd5756b68-rmhkn" in "kube-system" namespace has status "Ready":"True"
	I1205 19:37:26.125691    8344 pod_ready.go:81] duration metric: took 2.033978619s waiting for pod "coredns-5dd5756b68-rmhkn" in "kube-system" namespace to be "Ready" ...
	I1205 19:37:26.125727    8344 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-753790" in "kube-system" namespace to be "Ready" ...
	I1205 19:37:26.133633    8344 pod_ready.go:92] pod "etcd-addons-753790" in "kube-system" namespace has status "Ready":"True"
	I1205 19:37:26.133708    8344 pod_ready.go:81] duration metric: took 7.946161ms waiting for pod "etcd-addons-753790" in "kube-system" namespace to be "Ready" ...
	I1205 19:37:26.133748    8344 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-753790" in "kube-system" namespace to be "Ready" ...
	I1205 19:37:26.142849    8344 pod_ready.go:92] pod "kube-apiserver-addons-753790" in "kube-system" namespace has status "Ready":"True"
	I1205 19:37:26.142922    8344 pod_ready.go:81] duration metric: took 9.149041ms waiting for pod "kube-apiserver-addons-753790" in "kube-system" namespace to be "Ready" ...
	I1205 19:37:26.142947    8344 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-753790" in "kube-system" namespace to be "Ready" ...
	I1205 19:37:26.153050    8344 pod_ready.go:92] pod "kube-controller-manager-addons-753790" in "kube-system" namespace has status "Ready":"True"
	I1205 19:37:26.153115    8344 pod_ready.go:81] duration metric: took 10.148188ms waiting for pod "kube-controller-manager-addons-753790" in "kube-system" namespace to be "Ready" ...
	I1205 19:37:26.153157    8344 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8xqms" in "kube-system" namespace to be "Ready" ...
	I1205 19:37:26.167305    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:26.309355    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:26.363340    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:26.367361    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:26.452623    8344 pod_ready.go:92] pod "kube-proxy-8xqms" in "kube-system" namespace has status "Ready":"True"
	I1205 19:37:26.452652    8344 pod_ready.go:81] duration metric: took 299.47191ms waiting for pod "kube-proxy-8xqms" in "kube-system" namespace to be "Ready" ...
	I1205 19:37:26.452663    8344 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-753790" in "kube-system" namespace to be "Ready" ...
	I1205 19:37:26.667041    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:26.816414    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:26.854672    8344 pod_ready.go:92] pod "kube-scheduler-addons-753790" in "kube-system" namespace has status "Ready":"True"
	I1205 19:37:26.854696    8344 pod_ready.go:81] duration metric: took 402.024647ms waiting for pod "kube-scheduler-addons-753790" in "kube-system" namespace to be "Ready" ...
	I1205 19:37:26.854707    8344 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-5nn9m" in "kube-system" namespace to be "Ready" ...
	I1205 19:37:26.865690    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:26.870437    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:27.166835    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:27.309874    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:27.363397    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:27.367502    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:27.666402    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:27.815215    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:27.863372    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:27.866992    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:28.165895    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:28.310378    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:28.364918    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:28.372439    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:28.669691    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:28.811351    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:28.864735    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:28.869161    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:29.160005    8344 pod_ready.go:102] pod "metrics-server-7c66d45ddc-5nn9m" in "kube-system" namespace has status "Ready":"False"
	I1205 19:37:29.166263    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:29.310220    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:29.364383    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:29.368852    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:29.667781    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:29.811272    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:29.863795    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:29.867789    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:30.167992    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:30.310310    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:30.364582    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:30.370500    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:30.670421    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:30.810602    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:30.867214    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:30.870356    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:31.189973    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:31.311068    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:31.366057    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:31.367871    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:31.660426    8344 pod_ready.go:102] pod "metrics-server-7c66d45ddc-5nn9m" in "kube-system" namespace has status "Ready":"False"
	I1205 19:37:31.669837    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:31.811238    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:31.865726    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:31.869712    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:32.167000    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:32.309658    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:32.371405    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:32.374362    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:32.677532    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:32.809964    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:32.869913    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:32.871278    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:33.166242    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:33.309582    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:33.363678    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:33.366628    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:33.666477    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:33.810516    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:33.864076    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:33.868633    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:34.161114    8344 pod_ready.go:102] pod "metrics-server-7c66d45ddc-5nn9m" in "kube-system" namespace has status "Ready":"False"
	I1205 19:37:34.166154    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:34.309737    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:34.366296    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:34.376275    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:34.680205    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:34.809523    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:34.870180    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:34.871078    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:35.167936    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:35.309664    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:35.367454    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:35.370534    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:35.666903    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:35.810571    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:35.881740    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:35.884869    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:36.166803    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:36.310593    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:36.365601    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:36.369302    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:36.659151    8344 pod_ready.go:102] pod "metrics-server-7c66d45ddc-5nn9m" in "kube-system" namespace has status "Ready":"False"
	I1205 19:37:36.666599    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:36.809816    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:36.863956    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:36.866635    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:37.166543    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:37.309898    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:37.363593    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:37.367552    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:37.666983    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:37.810036    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:37.869435    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:37.873258    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:38.159293    8344 pod_ready.go:92] pod "metrics-server-7c66d45ddc-5nn9m" in "kube-system" namespace has status "Ready":"True"
	I1205 19:37:38.159367    8344 pod_ready.go:81] duration metric: took 11.304651809s waiting for pod "metrics-server-7c66d45ddc-5nn9m" in "kube-system" namespace to be "Ready" ...
	I1205 19:37:38.159391    8344 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-5g44z" in "kube-system" namespace to be "Ready" ...
	I1205 19:37:38.173730    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:38.311237    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:38.364698    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:38.373635    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:38.670788    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:38.809756    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:38.878545    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:38.904386    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:39.167532    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:39.310019    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:39.373175    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:39.377134    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:39.675805    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:39.810568    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:39.865667    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:39.870451    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:40.169613    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:40.188918    8344 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-5g44z" in "kube-system" namespace has status "Ready":"False"
	I1205 19:37:40.316451    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:40.363750    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:40.373322    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:40.667724    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:40.810217    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:40.870747    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:40.871225    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:41.166994    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:41.310730    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:41.364245    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:41.369963    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:41.667172    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:41.810633    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:41.866295    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:41.874045    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:42.168285    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:42.309918    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:42.364284    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:42.367164    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:42.667006    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:42.685062    8344 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-5g44z" in "kube-system" namespace has status "Ready":"False"
	I1205 19:37:42.810195    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:42.878356    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:42.879340    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:43.168969    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:43.310800    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:43.365741    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:43.370879    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:43.667509    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:43.811324    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:43.869858    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:43.872112    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:44.166296    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:44.309352    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:44.364301    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:44.366862    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:44.667105    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:44.810169    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:44.867293    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:44.869438    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:45.166829    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:45.187382    8344 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-5g44z" in "kube-system" namespace has status "Ready":"False"
	I1205 19:37:45.310635    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:45.363921    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:45.366765    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:45.666757    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:45.809542    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:45.865362    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:45.870611    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:46.166438    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:46.310853    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:46.368764    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:46.376698    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:46.667584    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:46.810333    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:46.865992    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:46.870832    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:47.166977    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:47.196126    8344 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-5g44z" in "kube-system" namespace has status "Ready":"False"
	I1205 19:37:47.310177    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:47.367643    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:47.368884    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:47.666503    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:47.810328    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:47.864753    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:47.869526    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:48.167085    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:48.310208    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:48.363803    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:48.366873    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:48.667699    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:48.810757    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:48.868640    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:48.886736    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:49.166544    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:49.309654    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:49.364900    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:49.373731    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:49.666648    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:49.685026    8344 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-5g44z" in "kube-system" namespace has status "Ready":"False"
	I1205 19:37:49.809957    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:49.863734    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:49.867375    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:50.166246    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:50.185118    8344 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-5g44z" in "kube-system" namespace has status "Ready":"True"
	I1205 19:37:50.185139    8344 pod_ready.go:81] duration metric: took 12.025727981s waiting for pod "nvidia-device-plugin-daemonset-5g44z" in "kube-system" namespace to be "Ready" ...
	I1205 19:37:50.185160    8344 pod_ready.go:38] duration metric: took 26.113955326s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 19:37:50.185177    8344 api_server.go:52] waiting for apiserver process to appear ...
	I1205 19:37:50.185238    8344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 19:37:50.202894    8344 api_server.go:72] duration metric: took 56.877965839s to wait for apiserver process to appear ...
	I1205 19:37:50.202969    8344 api_server.go:88] waiting for apiserver healthz status ...
	I1205 19:37:50.203021    8344 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:37:50.216050    8344 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1205 19:37:50.217621    8344 api_server.go:141] control plane version: v1.28.4
	I1205 19:37:50.217643    8344 api_server.go:131] duration metric: took 14.63437ms to wait for apiserver health ...
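As a side note on the check above: the healthz probe the log performs against https://192.168.49.2:8443/healthz can also be issued by hand through kubectl once the context exists; a minimal sketch using standard kubectl flags (not taken from this run):

	# Equivalent client-side health check, routed through the kubeconfig:
	kubectl --context addons-753790 get --raw /healthz
	# Prints "ok" on a healthy control plane; a verbose variant lists each check:
	kubectl --context addons-753790 get --raw '/healthz?verbose'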
	I1205 19:37:50.217652    8344 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 19:37:50.227050    8344 system_pods.go:59] 18 kube-system pods found
	I1205 19:37:50.227083    8344 system_pods.go:61] "coredns-5dd5756b68-rmhkn" [04289914-4790-4f6d-9b26-c32e7df62269] Running
	I1205 19:37:50.227093    8344 system_pods.go:61] "csi-hostpath-attacher-0" [c447d03a-fc55-4a98-ab99-6bdc4c9ee7a3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1205 19:37:50.227099    8344 system_pods.go:61] "csi-hostpath-resizer-0" [5f3d490d-5ef1-4df9-9bb4-2d88aafec0e5] Running
	I1205 19:37:50.227109    8344 system_pods.go:61] "csi-hostpathplugin-bblgk" [a46c5bbd-7a88-4a8a-8cd2-e38f0a86ef43] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1205 19:37:50.227116    8344 system_pods.go:61] "etcd-addons-753790" [cc672098-b116-421c-85c9-a0782494ac32] Running
	I1205 19:37:50.227128    8344 system_pods.go:61] "kindnet-j7sxw" [6767c908-4d95-48fe-8cad-132009ede731] Running
	I1205 19:37:50.227139    8344 system_pods.go:61] "kube-apiserver-addons-753790" [c3332431-7e6b-4d8e-ab6d-39e60810e4d0] Running
	I1205 19:37:50.227144    8344 system_pods.go:61] "kube-controller-manager-addons-753790" [88ee7e00-f689-4987-bc56-0a61aa738872] Running
	I1205 19:37:50.227151    8344 system_pods.go:61] "kube-ingress-dns-minikube" [4bbdda14-9e6c-48ab-bdaa-32bfcebc5fe8] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1205 19:37:50.227160    8344 system_pods.go:61] "kube-proxy-8xqms" [950dcb1d-2f3f-474e-a825-0c79deff5993] Running
	I1205 19:37:50.227166    8344 system_pods.go:61] "kube-scheduler-addons-753790" [20ce1ad9-7803-437e-bce6-657460ce774f] Running
	I1205 19:37:50.227171    8344 system_pods.go:61] "metrics-server-7c66d45ddc-5nn9m" [dfdc10e3-f82d-4c2f-b28e-d02c4992cbd7] Running
	I1205 19:37:50.227177    8344 system_pods.go:61] "nvidia-device-plugin-daemonset-5g44z" [e67179c1-2a66-42ab-af09-92698daea73e] Running
	I1205 19:37:50.227184    8344 system_pods.go:61] "registry-j6vr2" [2025c2db-46b4-422f-bf24-e183c416a7ae] Running
	I1205 19:37:50.227191    8344 system_pods.go:61] "registry-proxy-6gp6x" [a29e840a-e254-486b-98ae-b646b95120f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1205 19:37:50.227197    8344 system_pods.go:61] "snapshot-controller-58dbcc7b99-p95jm" [7b00d0fd-5b00-4d3c-bce0-60cb2b9328c6] Running
	I1205 19:37:50.227203    8344 system_pods.go:61] "snapshot-controller-58dbcc7b99-zs27h" [f46b8fba-4b6e-471e-95ee-7639a87beca6] Running
	I1205 19:37:50.227211    8344 system_pods.go:61] "storage-provisioner" [74b4f959-2938-46db-a04a-6cbe38891fab] Running
	I1205 19:37:50.227217    8344 system_pods.go:74] duration metric: took 9.55959ms to wait for pod list to return data ...
	I1205 19:37:50.227226    8344 default_sa.go:34] waiting for default service account to be created ...
	I1205 19:37:50.232999    8344 default_sa.go:45] found service account: "default"
	I1205 19:37:50.233024    8344 default_sa.go:55] duration metric: took 5.790905ms for default service account to be created ...
	I1205 19:37:50.233035    8344 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 19:37:50.246183    8344 system_pods.go:86] 18 kube-system pods found
	I1205 19:37:50.246212    8344 system_pods.go:89] "coredns-5dd5756b68-rmhkn" [04289914-4790-4f6d-9b26-c32e7df62269] Running
	I1205 19:37:50.246222    8344 system_pods.go:89] "csi-hostpath-attacher-0" [c447d03a-fc55-4a98-ab99-6bdc4c9ee7a3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1205 19:37:50.246228    8344 system_pods.go:89] "csi-hostpath-resizer-0" [5f3d490d-5ef1-4df9-9bb4-2d88aafec0e5] Running
	I1205 19:37:50.246258    8344 system_pods.go:89] "csi-hostpathplugin-bblgk" [a46c5bbd-7a88-4a8a-8cd2-e38f0a86ef43] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1205 19:37:50.246270    8344 system_pods.go:89] "etcd-addons-753790" [cc672098-b116-421c-85c9-a0782494ac32] Running
	I1205 19:37:50.246276    8344 system_pods.go:89] "kindnet-j7sxw" [6767c908-4d95-48fe-8cad-132009ede731] Running
	I1205 19:37:50.246281    8344 system_pods.go:89] "kube-apiserver-addons-753790" [c3332431-7e6b-4d8e-ab6d-39e60810e4d0] Running
	I1205 19:37:50.246286    8344 system_pods.go:89] "kube-controller-manager-addons-753790" [88ee7e00-f689-4987-bc56-0a61aa738872] Running
	I1205 19:37:50.246300    8344 system_pods.go:89] "kube-ingress-dns-minikube" [4bbdda14-9e6c-48ab-bdaa-32bfcebc5fe8] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1205 19:37:50.246306    8344 system_pods.go:89] "kube-proxy-8xqms" [950dcb1d-2f3f-474e-a825-0c79deff5993] Running
	I1205 19:37:50.246314    8344 system_pods.go:89] "kube-scheduler-addons-753790" [20ce1ad9-7803-437e-bce6-657460ce774f] Running
	I1205 19:37:50.246334    8344 system_pods.go:89] "metrics-server-7c66d45ddc-5nn9m" [dfdc10e3-f82d-4c2f-b28e-d02c4992cbd7] Running
	I1205 19:37:50.246349    8344 system_pods.go:89] "nvidia-device-plugin-daemonset-5g44z" [e67179c1-2a66-42ab-af09-92698daea73e] Running
	I1205 19:37:50.246354    8344 system_pods.go:89] "registry-j6vr2" [2025c2db-46b4-422f-bf24-e183c416a7ae] Running
	I1205 19:37:50.246362    8344 system_pods.go:89] "registry-proxy-6gp6x" [a29e840a-e254-486b-98ae-b646b95120f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1205 19:37:50.246370    8344 system_pods.go:89] "snapshot-controller-58dbcc7b99-p95jm" [7b00d0fd-5b00-4d3c-bce0-60cb2b9328c6] Running
	I1205 19:37:50.246376    8344 system_pods.go:89] "snapshot-controller-58dbcc7b99-zs27h" [f46b8fba-4b6e-471e-95ee-7639a87beca6] Running
	I1205 19:37:50.246380    8344 system_pods.go:89] "storage-provisioner" [74b4f959-2938-46db-a04a-6cbe38891fab] Running
	I1205 19:37:50.246389    8344 system_pods.go:126] duration metric: took 13.347156ms to wait for k8s-apps to be running ...
	I1205 19:37:50.246399    8344 system_svc.go:44] waiting for kubelet service to be running ...
	I1205 19:37:50.246452    8344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:37:50.265730    8344 system_svc.go:56] duration metric: took 19.321388ms (WaitForService) to wait for kubelet.
	I1205 19:37:50.265757    8344 kubeadm.go:581] duration metric: took 56.940835101s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1205 19:37:50.265783    8344 node_conditions.go:102] verifying NodePressure condition ...
	I1205 19:37:50.272402    8344 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1205 19:37:50.272430    8344 node_conditions.go:123] node cpu capacity is 2
	I1205 19:37:50.272441    8344 node_conditions.go:105] duration metric: took 6.653344ms to run NodePressure ...
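The NodePressure step above reads the node's reported conditions and capacity (ephemeral storage 203034800Ki, 2 CPUs). The same figures can be pulled from the client side; a quick sketch assuming standard kubectl, with nothing specific to this run:

	# Conditions (MemoryPressure/DiskPressure/PIDPressure/Ready) plus capacity:
	kubectl --context addons-753790 describe node addons-753790
	# Or just the raw capacity map the log echoes:
	kubectl --context addons-753790 get node addons-753790 -o jsonpath='{.status.capacity}'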
	I1205 19:37:50.272452    8344 start.go:228] waiting for startup goroutines ...
	I1205 19:37:50.310447    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:50.366196    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:50.373243    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:50.668018    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:50.809704    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:50.865242    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:50.874154    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:51.166804    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:51.313332    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:51.373433    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:51.386648    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:51.673329    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:51.814696    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:51.865990    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:51.872141    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:52.168928    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:52.309605    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:52.365134    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:52.368774    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:52.666539    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:52.809871    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:52.864301    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:52.870903    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:53.167481    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:53.310007    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:53.363856    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:53.367400    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:53.667209    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:53.809743    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:53.863718    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:53.867476    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:54.170780    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:54.310872    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:54.369032    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:54.375576    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:54.666623    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:54.813837    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:54.865263    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:54.869868    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:55.168730    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:55.310215    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:55.363515    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:55.367196    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:55.666885    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:55.810619    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:55.896162    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:55.897134    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:56.172316    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:56.311112    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:56.366210    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:56.369572    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:56.668951    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:56.811863    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:56.863996    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:56.867057    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:57.166754    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:57.310269    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:57.363496    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:57.367164    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:57.665948    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:57.809494    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:57.863736    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:57.867394    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:58.166937    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:58.310427    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:58.366127    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:58.369849    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:58.666691    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:58.810951    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:58.866253    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:58.868737    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:59.167888    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:59.309445    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:59.363852    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:59.366997    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:59.667449    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:59.810236    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:59.866400    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:59.870827    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:38:00.168789    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:00.310208    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:38:00.367215    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:00.370888    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:38:00.670052    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:00.812699    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:38:00.870175    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:00.875138    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:38:01.167676    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:01.309816    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:38:01.365546    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:01.369043    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:38:01.666610    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:01.810638    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:38:01.863308    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:01.867229    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:38:02.167233    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:02.309600    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:38:02.364390    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:02.367171    8344 kapi.go:107] duration metric: took 1m3.525198287s to wait for kubernetes.io/minikube-addons=registry ...
	I1205 19:38:02.666814    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:02.813224    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:38:02.863930    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:03.166548    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:03.309862    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:38:03.363647    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:03.667310    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:03.809997    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:38:03.864074    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:04.167632    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:04.310453    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:38:04.364371    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:04.666593    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:04.810039    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:38:04.863911    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:05.166121    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:05.309838    8344 kapi.go:107] duration metric: took 1m3.033731814s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1205 19:38:05.312038    8344 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-753790 cluster.
	I1205 19:38:05.314983    8344 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1205 19:38:05.317143    8344 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
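Following up on the gcp-auth hint above: opting a pod out means putting the `gcp-auth-skip-secret` label key on it before the admission webhook sees it. A minimal illustration with a hypothetical pod name ("my-pod" is a placeholder, not from this run):

	# Label the pod so gcp-auth skips credential mounting when it is (re)created;
	# the label key is what matters, "true" is an arbitrary value.
	kubectl --context addons-753790 label pod my-pod gcp-auth-skip-secret=true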
	I1205 19:38:05.364348    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:05.667696    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:05.864385    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:06.167773    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:06.365953    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:06.667802    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:06.864969    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:07.166988    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:07.363484    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:07.666944    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:07.864879    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:08.166735    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:08.363893    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:08.666791    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:08.864193    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:09.166282    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:09.364667    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:09.666124    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:09.864137    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:10.166751    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:10.363975    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:10.669369    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:10.865070    8344 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:38:11.168351    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:11.387086    8344 kapi.go:107] duration metric: took 1m12.547105276s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1205 19:38:11.667067    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:12.168174    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:12.667029    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:13.190404    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:13.667246    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:14.166403    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:14.666780    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:15.166419    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:15.666278    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:16.167034    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:16.669865    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:17.167223    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:17.667281    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:18.166813    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:18.671659    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:19.167090    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:19.666378    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:20.166473    8344 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:38:20.666318    8344 kapi.go:107] duration metric: took 1m21.525073367s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1205 19:38:20.668810    8344 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, ingress-dns, default-storageclass, storage-provisioner, storage-provisioner-rancher, inspektor-gadget, metrics-server, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1205 19:38:20.670687    8344 addons.go:502] enable addons completed in 1m27.65390539s: enabled=[cloud-spanner nvidia-device-plugin ingress-dns default-storageclass storage-provisioner storage-provisioner-rancher inspektor-gadget metrics-server volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
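Once "enable addons completed" appears, the resulting addon set can be cross-checked from the CLI; a routine verification sketch (real minikube flags, though this run did not execute it):

	# Lists every addon and its enabled/disabled state for this profile:
	minikube addons list -p addons-753790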
	I1205 19:38:20.670727    8344 start.go:233] waiting for cluster config update ...
	I1205 19:38:20.670759    8344 start.go:242] writing updated cluster config ...
	I1205 19:38:20.671066    8344 ssh_runner.go:195] Run: rm -f paused
	I1205 19:38:21.009787    8344 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1205 19:38:21.012604    8344 out.go:177] * Done! kubectl is now configured to use "addons-753790" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Dec 05 19:47:27 addons-753790 crio[893]: time="2023-12-05 19:47:27.804922218Z" level=info msg="Creating container: gadget/gadget-qxcgc/gadget" id=a51091e4-9f7d-4080-bc10-a6498a7b6b69 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:47:27 addons-753790 crio[893]: time="2023-12-05 19:47:27.805009759Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 05 19:47:27 addons-753790 conmon[8708]: conmon 15276b6404a374962a6b <nwarn>: runtime stderr: time="2023-12-05T19:47:27Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	                                            time="2023-12-05T19:47:27Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	                                            time="2023-12-05T19:47:27Z" level=warning msg="lstat : no such file or directory"
	                                            time="2023-12-05T19:47:27Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
	Dec 05 19:47:27 addons-753790 conmon[8708]: conmon 15276b6404a374962a6b <error>: Failed to create container: exit status 1
	Dec 05 19:47:27 addons-753790 crio[893]: time="2023-12-05 19:47:27.885507642Z" level=error msg="Container creation error: time=\"2023-12-05T19:47:27Z\" level=warning msg=\"cannot toggle freezer: cgroups not configured for container\"\ntime=\"2023-12-05T19:47:27Z\" level=warning msg=\"cannot toggle freezer: cgroups not configured for container\"\ntime=\"2023-12-05T19:47:27Z\" level=warning msg=\"lstat : no such file or directory\"\ntime=\"2023-12-05T19:47:27Z\" level=error msg=\"container_linux.go:380: starting container process caused: exec: \\\"/entrypoint.sh\\\": stat /entrypoint.sh: no such file or directory\"\n" id=a51091e4-9f7d-4080-bc10-a6498a7b6b69 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:47:27 addons-753790 crio[893]: time="2023-12-05 19:47:27.894309109Z" level=info msg="createCtr: deleting container ID 15276b6404a374962a6b45928d82217ec79a129e3b4464e0177125eadcf8356d from idIndex" id=a51091e4-9f7d-4080-bc10-a6498a7b6b69 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:47:27 addons-753790 crio[893]: time="2023-12-05 19:47:27.894358004Z" level=info msg="createCtr: deleting container ID 15276b6404a374962a6b45928d82217ec79a129e3b4464e0177125eadcf8356d from idIndex" id=a51091e4-9f7d-4080-bc10-a6498a7b6b69 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:47:27 addons-753790 crio[893]: time="2023-12-05 19:47:27.895031651Z" level=info msg="createCtr: deleting container ID 15276b6404a374962a6b45928d82217ec79a129e3b4464e0177125eadcf8356d from idIndex" id=a51091e4-9f7d-4080-bc10-a6498a7b6b69 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:47:27 addons-753790 crio[893]: time="2023-12-05 19:47:27.903199693Z" level=info msg="createCtr: deleting container ID 15276b6404a374962a6b45928d82217ec79a129e3b4464e0177125eadcf8356d from idIndex" id=a51091e4-9f7d-4080-bc10-a6498a7b6b69 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:47:40 addons-753790 crio[893]: time="2023-12-05 19:47:40.489541482Z" level=info msg="Checking image status: ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931" id=036d6c42-3d69-4631-9658-ff2819a9d231 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 19:47:40 addons-753790 crio[893]: time="2023-12-05 19:47:40.489803031Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:45e33ff5627bef80cc4abebf01df370198c2f8e21477685063cd5dd2a33b648c,RepoTags:[],RepoDigests:[ghcr.io/inspektor-gadget/inspektor-gadget@sha256:4decec48d0f1fdd5d28e85b558eddef3ba91bbf7ebc7f43b5ec6a86b210a78c9 ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931],Size_:248786914,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=036d6c42-3d69-4631-9658-ff2819a9d231 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 19:47:40 addons-753790 crio[893]: time="2023-12-05 19:47:40.490616191Z" level=info msg="Pulling image: ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931" id=a586e860-9ad5-41d5-9dcd-2435b66cb8dc name=/runtime.v1.ImageService/PullImage
	Dec 05 19:47:40 addons-753790 crio[893]: time="2023-12-05 19:47:40.492514620Z" level=info msg="Trying to access \"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931\""
	Dec 05 19:47:40 addons-753790 crio[893]: time="2023-12-05 19:47:40.801067246Z" level=info msg="Pulled image: ghcr.io/inspektor-gadget/inspektor-gadget@sha256:4decec48d0f1fdd5d28e85b558eddef3ba91bbf7ebc7f43b5ec6a86b210a78c9" id=a586e860-9ad5-41d5-9dcd-2435b66cb8dc name=/runtime.v1.ImageService/PullImage
	Dec 05 19:47:40 addons-753790 crio[893]: time="2023-12-05 19:47:40.801900854Z" level=info msg="Checking image status: ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931" id=55f13ef8-0324-4dff-aea1-fff132c386a3 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 19:47:40 addons-753790 crio[893]: time="2023-12-05 19:47:40.802107166Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:45e33ff5627bef80cc4abebf01df370198c2f8e21477685063cd5dd2a33b648c,RepoTags:[],RepoDigests:[ghcr.io/inspektor-gadget/inspektor-gadget@sha256:4decec48d0f1fdd5d28e85b558eddef3ba91bbf7ebc7f43b5ec6a86b210a78c9 ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931],Size_:248786914,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=55f13ef8-0324-4dff-aea1-fff132c386a3 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 19:47:40 addons-753790 crio[893]: time="2023-12-05 19:47:40.803055203Z" level=info msg="Creating container: gadget/gadget-qxcgc/gadget" id=b3665fa5-477d-4779-be5d-effe752c4b93 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:47:40 addons-753790 crio[893]: time="2023-12-05 19:47:40.803134326Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 05 19:47:40 addons-753790 conmon[8735]: conmon d3176d93d1c613ecaa38 <nwarn>: runtime stderr: time="2023-12-05T19:47:40Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	                                            time="2023-12-05T19:47:40Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	                                            time="2023-12-05T19:47:40Z" level=warning msg="lstat : no such file or directory"
	                                            time="2023-12-05T19:47:40Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
	Dec 05 19:47:40 addons-753790 conmon[8735]: conmon d3176d93d1c613ecaa38 <error>: Failed to create container: exit status 1
	Dec 05 19:47:40 addons-753790 crio[893]: time="2023-12-05 19:47:40.874147487Z" level=error msg="Container creation error: time=\"2023-12-05T19:47:40Z\" level=warning msg=\"cannot toggle freezer: cgroups not configured for container\"\ntime=\"2023-12-05T19:47:40Z\" level=warning msg=\"cannot toggle freezer: cgroups not configured for container\"\ntime=\"2023-12-05T19:47:40Z\" level=warning msg=\"lstat : no such file or directory\"\ntime=\"2023-12-05T19:47:40Z\" level=error msg=\"container_linux.go:380: starting container process caused: exec: \\\"/entrypoint.sh\\\": stat /entrypoint.sh: no such file or directory\"\n" id=b3665fa5-477d-4779-be5d-effe752c4b93 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:47:40 addons-753790 crio[893]: time="2023-12-05 19:47:40.882281461Z" level=info msg="createCtr: deleting container ID d3176d93d1c613ecaa3828daf1ca9f9b82a9c9d32dc700674be8ffdd4ba92329 from idIndex" id=b3665fa5-477d-4779-be5d-effe752c4b93 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:47:40 addons-753790 crio[893]: time="2023-12-05 19:47:40.882336493Z" level=info msg="createCtr: deleting container ID d3176d93d1c613ecaa3828daf1ca9f9b82a9c9d32dc700674be8ffdd4ba92329 from idIndex" id=b3665fa5-477d-4779-be5d-effe752c4b93 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:47:40 addons-753790 crio[893]: time="2023-12-05 19:47:40.882354585Z" level=info msg="createCtr: deleting container ID d3176d93d1c613ecaa3828daf1ca9f9b82a9c9d32dc700674be8ffdd4ba92329 from idIndex" id=b3665fa5-477d-4779-be5d-effe752c4b93 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 19:47:40 addons-753790 crio[893]: time="2023-12-05 19:47:40.890344935Z" level=info msg="createCtr: deleting container ID d3176d93d1c613ecaa3828daf1ca9f9b82a9c9d32dc700674be8ffdd4ba92329 from idIndex" id=b3665fa5-477d-4779-be5d-effe752c4b93 name=/runtime.v1.RuntimeService/CreateContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                          CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	61c78a979e048       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                               2 minutes ago       Exited              hello-world-app           5                   7a9f75b53ae70       hello-world-app-5d77478584-hfd2s
	248179e129740       docker.io/library/nginx@sha256:18d2bb20c22e511b92a3ec81f553edfcaeeb74fd1c96a92c56a6c4252c75eec7                7 minutes ago       Running             nginx                     0                   0dfb8124e8975       nginx
	e9bc574b338f9       ghcr.io/headlamp-k8s/headlamp@sha256:7a9587036bd29304f8f1387a7245556a3c479434670b2ca58e3624d44d2a68c9          8 minutes ago       Running             headlamp                  0                   df30d83a277a6       headlamp-777fd4b855-4wt8j
	1d3df6e6d00dc       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa   9 minutes ago       Running             gcp-auth                  0                   d436ce15d9b7b       gcp-auth-d4c87556c-hzq5m
	d3ac64f27fd20       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                               10 minutes ago      Running             storage-provisioner       0                   e2d165cc33a69       storage-provisioner
	5490b1908a513       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                               10 minutes ago      Running             coredns                   0                   0a827e54023ce       coredns-5dd5756b68-rmhkn
	42b1944b80035       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39                                               10 minutes ago      Running             kube-proxy                0                   ea1a9cfa74b27       kube-proxy-8xqms
	fed428c064458       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                               10 minutes ago      Running             kindnet-cni               0                   c8e6f8c914957       kindnet-j7sxw
	f402a5f264d2f       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b                                               11 minutes ago      Running             kube-controller-manager   0                   3536c8b40fb94       kube-controller-manager-addons-753790
	940f8074d6bd5       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419                                               11 minutes ago      Running             kube-apiserver            0                   e378c24b59eda       kube-apiserver-addons-753790
	49e03b8e4b31d       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54                                               11 minutes ago      Running             kube-scheduler            0                   d0118c0a56c79       kube-scheduler-addons-753790
	efad096daa660       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                               11 minutes ago      Running             etcd                      0                   a9011d0d4417c       etcd-addons-753790
	
	* 
	* ==> coredns [5490b1908a51341623358c1eb0b51c35ee5b88da19aaf50b3eaa21aacacae120] <==
	* [INFO] 10.244.0.18:36855 - 59428 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000047565s
	[INFO] 10.244.0.18:36855 - 62673 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000076226s
	[INFO] 10.244.0.18:36855 - 31863 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000052957s
	[INFO] 10.244.0.18:36855 - 49236 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000065633s
	[INFO] 10.244.0.18:36855 - 7934 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001108703s
	[INFO] 10.244.0.18:36855 - 7223 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000957055s
	[INFO] 10.244.0.18:36855 - 25649 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000049699s
	[INFO] 10.244.0.18:45570 - 64233 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000103763s
	[INFO] 10.244.0.18:33219 - 36579 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000111124s
	[INFO] 10.244.0.18:45570 - 33825 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000098561s
	[INFO] 10.244.0.18:33219 - 39345 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000070098s
	[INFO] 10.244.0.18:45570 - 56380 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000102458s
	[INFO] 10.244.0.18:45570 - 10064 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000060644s
	[INFO] 10.244.0.18:45570 - 32758 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000065363s
	[INFO] 10.244.0.18:45570 - 51718 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000051733s
	[INFO] 10.244.0.18:45570 - 26821 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001469502s
	[INFO] 10.244.0.18:33219 - 4631 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000152953s
	[INFO] 10.244.0.18:33219 - 7117 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000632908s
	[INFO] 10.244.0.18:45570 - 5975 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001651075s
	[INFO] 10.244.0.18:33219 - 37753 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000115972s
	[INFO] 10.244.0.18:45570 - 13725 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000061777s
	[INFO] 10.244.0.18:33219 - 62665 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000037424s
	[INFO] 10.244.0.18:33219 - 5809 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00109793s
	[INFO] 10.244.0.18:33219 - 28366 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002802067s
	[INFO] 10.244.0.18:33219 - 51221 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000061031s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-753790
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-753790
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728
	                    minikube.k8s.io/name=addons-753790
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_05T19_36_41_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-753790
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Dec 2023 19:36:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-753790
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Dec 2023 19:47:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Dec 2023 19:47:24 +0000   Tue, 05 Dec 2023 19:36:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Dec 2023 19:47:24 +0000   Tue, 05 Dec 2023 19:36:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Dec 2023 19:47:24 +0000   Tue, 05 Dec 2023 19:36:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Dec 2023 19:47:24 +0000   Tue, 05 Dec 2023 19:37:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-753790
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 10f3442ae5bd4cb6ac1d005edf4e5579
	  System UUID:                a984fe80-b922-46c4-acc7-231aa98aa32e
	  Boot ID:                    ade55ee8-b6ef-4756-8af5-2453aa07c908
	  Kernel Version:             5.15.0-1050-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-hfd2s         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m58s
	  gadget                      gadget-qxcgc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  gcp-auth                    gcp-auth-d4c87556c-hzq5m                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  headlamp                    headlamp-777fd4b855-4wt8j                0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m9s
	  kube-system                 coredns-5dd5756b68-rmhkn                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 etcd-addons-753790                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-j7sxw                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-addons-753790             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-addons-753790    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-8xqms                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-753790             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node addons-753790 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node addons-753790 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node addons-753790 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node addons-753790 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node addons-753790 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet          Node addons-753790 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node addons-753790 event: Registered Node addons-753790 in Controller
	  Normal  NodeReady                10m                kubelet          Node addons-753790 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Dec 5 19:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015635] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +1.321413] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.302274] kauditd_printk_skb: 26 callbacks suppressed
	
	* 
	* ==> etcd [efad096daa6601649f8ea74e53d8bbd7484d55852d7c430fbc34eda28bc180a3] <==
	* {"level":"info","ts":"2023-12-05T19:36:34.030666Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-05T19:36:34.035983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-12-05T19:36:34.036164Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-12-05T19:36:34.975795Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-05T19:36:34.975911Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-05T19:36:34.975951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-12-05T19:36:34.975996Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-12-05T19:36:34.976027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-12-05T19:36:34.976065Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-12-05T19:36:34.9761Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-12-05T19:36:34.979867Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T19:36:34.98393Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-753790 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-05T19:36:34.987812Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T19:36:34.987923Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T19:36:34.987973Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T19:36:34.988009Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-05T19:36:34.98901Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-05T19:36:34.989097Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-05T19:36:34.991816Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-05T19:36:34.991893Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-05T19:36:34.992654Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-12-05T19:36:56.110304Z","caller":"traceutil/trace.go:171","msg":"trace[257927791] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"113.16543ms","start":"2023-12-05T19:36:55.997124Z","end":"2023-12-05T19:36:56.110289Z","steps":["trace[257927791] 'process raft request'  (duration: 51.277794ms)","trace[257927791] 'compare'  (duration: 61.821781ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-05T19:46:35.152231Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1845}
	{"level":"info","ts":"2023-12-05T19:46:35.182903Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1845,"took":"30.01674ms","hash":4262626028}
	{"level":"info","ts":"2023-12-05T19:46:35.182957Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4262626028,"revision":1845,"compact-revision":-1}
	
	* 
	* ==> gcp-auth [1d3df6e6d00dc10181a118ee13946f20fe47e312f4b1ffb5871be1445f12b7fa] <==
	* 2023/12/05 19:38:04 GCP Auth Webhook started!
	2023/12/05 19:38:31 Ready to marshal response ...
	2023/12/05 19:38:31 Ready to write response ...
	2023/12/05 19:38:42 Ready to marshal response ...
	2023/12/05 19:38:42 Ready to write response ...
	2023/12/05 19:38:42 Ready to marshal response ...
	2023/12/05 19:38:42 Ready to write response ...
	2023/12/05 19:38:50 Ready to marshal response ...
	2023/12/05 19:38:50 Ready to write response ...
	2023/12/05 19:38:57 Ready to marshal response ...
	2023/12/05 19:38:57 Ready to write response ...
	2023/12/05 19:39:14 Ready to marshal response ...
	2023/12/05 19:39:14 Ready to write response ...
	2023/12/05 19:39:35 Ready to marshal response ...
	2023/12/05 19:39:35 Ready to write response ...
	2023/12/05 19:39:35 Ready to marshal response ...
	2023/12/05 19:39:35 Ready to write response ...
	2023/12/05 19:39:35 Ready to marshal response ...
	2023/12/05 19:39:35 Ready to write response ...
	2023/12/05 19:39:46 Ready to marshal response ...
	2023/12/05 19:39:46 Ready to write response ...
	2023/12/05 19:42:05 Ready to marshal response ...
	2023/12/05 19:42:05 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  19:47:44 up 30 min,  0 users,  load average: 0.08, 0.31, 0.31
	Linux addons-753790 5.15.0-1050-aws #55~20.04.1-Ubuntu SMP Mon Nov 6 12:18:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [fed428c0644589df3c411742042bdbbd3affc28eeaf51b382ea5b1dda67305a3] <==
	* I1205 19:45:43.825541       1 main.go:227] handling current node
	I1205 19:45:53.828996       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:45:53.829027       1 main.go:227] handling current node
	I1205 19:46:03.840636       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:46:03.840660       1 main.go:227] handling current node
	I1205 19:46:13.854436       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:46:13.854468       1 main.go:227] handling current node
	I1205 19:46:23.863605       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:46:23.863629       1 main.go:227] handling current node
	I1205 19:46:33.867527       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:46:33.867556       1 main.go:227] handling current node
	I1205 19:46:43.879119       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:46:43.879144       1 main.go:227] handling current node
	I1205 19:46:53.883670       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:46:53.883696       1 main.go:227] handling current node
	I1205 19:47:03.887807       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:47:03.887829       1 main.go:227] handling current node
	I1205 19:47:13.891805       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:47:13.891827       1 main.go:227] handling current node
	I1205 19:47:23.903312       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:47:23.903339       1 main.go:227] handling current node
	I1205 19:47:33.917780       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:47:33.917888       1 main.go:227] handling current node
	I1205 19:47:43.922232       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:47:43.922854       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [940f8074d6bd526a437a01a29138b4d811e400064a5968a96703989071fc2704] <==
	* E1205 19:39:06.532374       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1205 19:39:08.450318       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1205 19:39:30.366542       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:39:30.366651       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:39:30.382653       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:39:30.382705       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:39:30.403337       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:39:30.403460       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:39:30.447864       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:39:30.448013       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:39:30.500272       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:39:30.500407       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:39:30.527851       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:39:30.527892       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1205 19:39:31.403714       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1205 19:39:31.528107       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1205 19:39:31.534577       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1205 19:39:35.228412       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.246.102"}
	I1205 19:39:37.243775       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1205 19:39:46.091177       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1205 19:39:46.389307       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.176.121"}
	I1205 19:40:38.964481       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1205 19:41:37.602032       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I1205 19:42:06.031352       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.177.56"}
	I1205 19:46:37.602093       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [f402a5f264d2f85c867a58fbc63ef3df203f1c3e2c8361c8379ee70c9ce2d383] <==
	* E1205 19:44:54.928362       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1205 19:45:09.770990       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="67.816µs"
	W1205 19:45:15.357666       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:45:15.357701       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:45:22.412338       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:45:22.412369       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1205 19:45:25.501225       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="67.299µs"
	W1205 19:45:43.061042       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:45:43.061074       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:46:07.482465       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:46:07.482494       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:46:08.636986       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:46:08.637018       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:46:36.193395       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:46:36.193426       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:46:53.450363       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:46:53.450393       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:47:00.165855       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:47:00.165892       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:47:08.102118       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:47:08.102150       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:47:41.575581       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:47:41.575613       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:47:42.117962       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:47:42.117995       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [42b1944b800350c918edede48d949a74a384517b604dec31f44caab9433173b6] <==
	* I1205 19:36:54.004249       1 server_others.go:69] "Using iptables proxy"
	I1205 19:36:56.521209       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1205 19:36:58.561313       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 19:36:58.565084       1 server_others.go:152] "Using iptables Proxier"
	I1205 19:36:58.565167       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1205 19:36:58.565200       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1205 19:36:58.565301       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1205 19:36:58.565568       1 server.go:846] "Version info" version="v1.28.4"
	I1205 19:36:58.565735       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 19:36:58.567018       1 config.go:188] "Starting service config controller"
	I1205 19:36:58.567125       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1205 19:36:58.567171       1 config.go:97] "Starting endpoint slice config controller"
	I1205 19:36:58.567199       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1205 19:36:58.567722       1 config.go:315] "Starting node config controller"
	I1205 19:36:58.569872       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1205 19:36:58.668484       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1205 19:36:58.668737       1 shared_informer.go:318] Caches are synced for service config
	I1205 19:36:58.670896       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [49e03b8e4b31d4071c934d34366e1605b553bf107e4a169c473753f4b5868652] <==
	* W1205 19:36:37.640539       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 19:36:37.640571       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1205 19:36:37.647497       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1205 19:36:37.647538       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1205 19:36:37.647625       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 19:36:37.647648       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1205 19:36:37.647736       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1205 19:36:37.647767       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1205 19:36:37.647739       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 19:36:37.647793       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 19:36:37.647849       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1205 19:36:37.647864       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1205 19:36:37.647909       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1205 19:36:37.647957       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1205 19:36:37.647929       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1205 19:36:37.648026       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1205 19:36:37.647999       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1205 19:36:37.648096       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1205 19:36:37.648059       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 19:36:37.648182       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1205 19:36:37.655998       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1205 19:36:37.656037       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1205 19:36:38.517544       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 19:36:38.517675       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1205 19:36:41.023836       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Dec 05 19:47:27 addons-753790 kubelet[1361]:         time="2023-12-05T19:47:27Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	Dec 05 19:47:27 addons-753790 kubelet[1361]:         time="2023-12-05T19:47:27Z" level=warning msg="lstat : no such file or directory"
	Dec 05 19:47:27 addons-753790 kubelet[1361]:         time="2023-12-05T19:47:27Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
	Dec 05 19:47:27 addons-753790 kubelet[1361]:  > podSandboxID="1a9ede046909add9684135c149ff559d1545071f5ebb837fc96c544250cc557d"
	Dec 05 19:47:27 addons-753790 kubelet[1361]: E1205 19:47:27.903688    1361 kuberuntime_manager.go:1261] container &Container{Name:gadget,Image:ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931,Command:[/entrypoint.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:GADGET_POD_UID,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.uid,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:TRACELOOP_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:TRACELOOP_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVers
ion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:TRACELOOP_POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:GADGET_IMAGE,Value:ghcr.io/inspektor-gadget/inspektor-gadget,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_VERSION,Value:v0.16.1,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_OPTION_HOOK_MODE,Value:auto,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_OPTION_FALLBACK_POD_INFORMER,Value:true,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_CONTAINERD_SOCKETPATH,Value:/run/containerd/containerd.sock,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_CRIO_SOCKETPATH,Value:/run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_DOCKER_SOCKETPATH,Value:/run/docker.sock,ValueFrom:nil,},EnvVar{Name:HOST_ROOT,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Clai
ms:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:run,ReadOnly:false,MountPath:/run,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:modules,ReadOnly:false,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:debugfs,ReadOnly:false,MountPath:/sys/kernel/debug,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cgroup,ReadOnly:false,MountPath:/sys/fs/cgroup,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:bpffs,ReadOnly:false,MountPath:/sys/fs/bpf,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-snrgg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/gadgettracermanager -liveness],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,Pe
riodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/gadgettracermanager -liveness],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYSLOG SYS_PTRACE SYS_RESOURCE IPC_LOCK SYS_MODULE NET_RAW],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod gadget-qxcgc_gadget(97bec43a-0805-4763-9862-53819201c4e8): CreateContainerError: container create failed: time="2023-12-05T19:47:27Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	Dec 05 19:47:27 addons-753790 kubelet[1361]: time="2023-12-05T19:47:27Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	Dec 05 19:47:27 addons-753790 kubelet[1361]: time="2023-12-05T19:47:27Z" level=warning msg="lstat : no such file or directory"
	Dec 05 19:47:27 addons-753790 kubelet[1361]: time="2023-12-05T19:47:27Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
	Dec 05 19:47:27 addons-753790 kubelet[1361]: E1205 19:47:27.903744    1361 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CreateContainerError: \"container create failed: time=\\\"2023-12-05T19:47:27Z\\\" level=warning msg=\\\"cannot toggle freezer: cgroups not configured for container\\\"\\ntime=\\\"2023-12-05T19:47:27Z\\\" level=warning msg=\\\"cannot toggle freezer: cgroups not configured for container\\\"\\ntime=\\\"2023-12-05T19:47:27Z\\\" level=warning msg=\\\"lstat : no such file or directory\\\"\\ntime=\\\"2023-12-05T19:47:27Z\\\" level=error msg=\\\"container_linux.go:380: starting container process caused: exec: \\\\\\\"/entrypoint.sh\\\\\\\": stat /entrypoint.sh: no such file or directory\\\"\\n\"" pod="gadget/gadget-qxcgc" podUID="97bec43a-0805-4763-9862-53819201c4e8"
	Dec 05 19:47:39 addons-753790 kubelet[1361]: I1205 19:47:39.489145    1361 scope.go:117] "RemoveContainer" containerID="61c78a979e0482fbdb5d53e3374fed990af7900c2865a7bbab6df5025d62c214"
	Dec 05 19:47:39 addons-753790 kubelet[1361]: E1205 19:47:39.489461    1361 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-hfd2s_default(259a4cd4-a199-4381-8550-0740f44110f7)\"" pod="default/hello-world-app-5d77478584-hfd2s" podUID="259a4cd4-a199-4381-8550-0740f44110f7"
	Dec 05 19:47:40 addons-753790 kubelet[1361]: E1205 19:47:40.684536    1361 manager.go:1106] Failed to create existing container: /docker/9a7b5170de31ef918d86be75d8cc26debc9bf3dcf4d5952d94980f981fbf56db/crio-00058ef70b00da84164de5be179810dc1ed2243a9717f97c93ebab10a4045748: Error finding container 00058ef70b00da84164de5be179810dc1ed2243a9717f97c93ebab10a4045748: Status 404 returned error can't find the container with id 00058ef70b00da84164de5be179810dc1ed2243a9717f97c93ebab10a4045748
	Dec 05 19:47:40 addons-753790 kubelet[1361]: E1205 19:47:40.685505    1361 manager.go:1106] Failed to create existing container: /crio-00058ef70b00da84164de5be179810dc1ed2243a9717f97c93ebab10a4045748: Error finding container 00058ef70b00da84164de5be179810dc1ed2243a9717f97c93ebab10a4045748: Status 404 returned error can't find the container with id 00058ef70b00da84164de5be179810dc1ed2243a9717f97c93ebab10a4045748
	Dec 05 19:47:40 addons-753790 kubelet[1361]: E1205 19:47:40.721549    1361 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a6a41394f1799d68901b68a4a53f606af8a87aabd9eb8e8bd624bc1ebd802874/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a6a41394f1799d68901b68a4a53f606af8a87aabd9eb8e8bd624bc1ebd802874/diff: no such file or directory, extraDiskErr: <nil>
	Dec 05 19:47:40 addons-753790 kubelet[1361]: E1205 19:47:40.890661    1361 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err=<
	Dec 05 19:47:40 addons-753790 kubelet[1361]:         rpc error: code = Unknown desc = container create failed: time="2023-12-05T19:47:40Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	Dec 05 19:47:40 addons-753790 kubelet[1361]:         time="2023-12-05T19:47:40Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	Dec 05 19:47:40 addons-753790 kubelet[1361]:         time="2023-12-05T19:47:40Z" level=warning msg="lstat : no such file or directory"
	Dec 05 19:47:40 addons-753790 kubelet[1361]:         time="2023-12-05T19:47:40Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
	Dec 05 19:47:40 addons-753790 kubelet[1361]:  > podSandboxID="1a9ede046909add9684135c149ff559d1545071f5ebb837fc96c544250cc557d"
	Dec 05 19:47:40 addons-753790 kubelet[1361]: E1205 19:47:40.890874    1361 kuberuntime_manager.go:1261] container &Container{Name:gadget,Image:ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931,Command:[/entrypoint.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:GADGET_POD_UID,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.uid,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:TRACELOOP_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:TRACELOOP_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVers
ion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:TRACELOOP_POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:GADGET_IMAGE,Value:ghcr.io/inspektor-gadget/inspektor-gadget,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_VERSION,Value:v0.16.1,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_OPTION_HOOK_MODE,Value:auto,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_OPTION_FALLBACK_POD_INFORMER,Value:true,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_CONTAINERD_SOCKETPATH,Value:/run/containerd/containerd.sock,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_CRIO_SOCKETPATH,Value:/run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_DOCKER_SOCKETPATH,Value:/run/docker.sock,ValueFrom:nil,},EnvVar{Name:HOST_ROOT,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Clai
ms:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:run,ReadOnly:false,MountPath:/run,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:modules,ReadOnly:false,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:debugfs,ReadOnly:false,MountPath:/sys/kernel/debug,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cgroup,ReadOnly:false,MountPath:/sys/fs/cgroup,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:bpffs,ReadOnly:false,MountPath:/sys/fs/bpf,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-snrgg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/gadgettracermanager -liveness],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,Pe
riodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/gadgettracermanager -liveness],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYSLOG SYS_PTRACE SYS_RESOURCE IPC_LOCK SYS_MODULE NET_RAW],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod gadget-qxcgc_gadget(97bec43a-0805-4763-9862-53819201c4e8): CreateContainerError: container create failed: time="2023-12-05T19:47:40Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	Dec 05 19:47:40 addons-753790 kubelet[1361]: time="2023-12-05T19:47:40Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	Dec 05 19:47:40 addons-753790 kubelet[1361]: time="2023-12-05T19:47:40Z" level=warning msg="lstat : no such file or directory"
	Dec 05 19:47:40 addons-753790 kubelet[1361]: time="2023-12-05T19:47:40Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
	Dec 05 19:47:40 addons-753790 kubelet[1361]: E1205 19:47:40.890925    1361 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CreateContainerError: \"container create failed: time=\\\"2023-12-05T19:47:40Z\\\" level=warning msg=\\\"cannot toggle freezer: cgroups not configured for container\\\"\\ntime=\\\"2023-12-05T19:47:40Z\\\" level=warning msg=\\\"cannot toggle freezer: cgroups not configured for container\\\"\\ntime=\\\"2023-12-05T19:47:40Z\\\" level=warning msg=\\\"lstat : no such file or directory\\\"\\ntime=\\\"2023-12-05T19:47:40Z\\\" level=error msg=\\\"container_linux.go:380: starting container process caused: exec: \\\\\\\"/entrypoint.sh\\\\\\\": stat /entrypoint.sh: no such file or directory\\\"\\n\"" pod="gadget/gadget-qxcgc" podUID="97bec43a-0805-4763-9862-53819201c4e8"
	
	* 
	* ==> storage-provisioner [d3ac64f27fd207935cf6e7d2b9db91f81624dd57d7d3c03202c425be5c5d0591] <==
	* I1205 19:37:24.831015       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 19:37:24.847516       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 19:37:24.847712       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 19:37:24.855360       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 19:37:24.855516       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-753790_0e8b6e2e-197e-4d66-a0f4-e0115409849b!
	I1205 19:37:24.855506       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d43edea0-fe69-4b72-abe1-b28a5b73d893", APIVersion:"v1", ResourceVersion:"880", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-753790_0e8b6e2e-197e-4d66-a0f4-e0115409849b became leader
	I1205 19:37:24.958132       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-753790_0e8b6e2e-197e-4d66-a0f4-e0115409849b!
	

                                                
                                                
-- /stdout --
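Note that the storage-provisioner excerpt above is healthy: it initialized, won the kube-system/k8s.io-minikube-hostpath leader election at 19:37:24, and started its controller, so the only unhealthy workload in this run is the gadget pod. A quick way to confirm the leader election state, sketched on the assumption that the addons-753790 profile is still up (standard kubectl only):

	# The provisioner records its leader identity on this Endpoints object
	# (see the LeaderElection event in the log above):
	kubectl --context addons-753790 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml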
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-753790 -n addons-753790
helpers_test.go:261: (dbg) Run:  kubectl --context addons-753790 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: gadget-qxcgc
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/InspektorGadget]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-753790 describe pod gadget-qxcgc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-753790 describe pod gadget-qxcgc: exit status 1 (96.329438ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "gadget-qxcgc" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-753790 describe pod gadget-qxcgc: exit status 1
--- FAIL: TestAddons/parallel/InspektorGadget (483.31s)
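The failure signature through this whole run is one kubelet error repeated: CRI-O cannot start the gadget container because `exec: "/entrypoint.sh": stat /entrypoint.sh: no such file or directory`, i.e. the entrypoint named in the container spec is absent from the image filesystem on this arm64 node. The spec dump above also shows the image pinned at v0.23.1 while INSPEKTOR_GADGET_VERSION is still v0.16.1, which suggests version skew in the addon manifest. A hedged triage sequence, assuming the addons-753790 profile is still running (the gadget pod itself was already gone by the time the describe above ran) and using only standard kubectl/crictl flags:

	# DaemonSet status and per-pod state for the gadget addon:
	kubectl --context addons-753790 -n gadget get daemonset,pods -o wide
	kubectl --context addons-753790 -n gadget get events --sort-by=.lastTimestamp
	# Which gadget image (and digest) the node actually pulled:
	minikube -p addons-753790 ssh -- sudo crictl images --digests | grep inspektor-gadget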

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (175.01s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-867324 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-867324 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (8.57663448s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-867324 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-867324 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [7c70059c-b613-42e4-8c09-421fd5b9aaa8] Pending
helpers_test.go:344: "nginx" [7c70059c-b613-42e4-8c09-421fd5b9aaa8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [7c70059c-b613-42e4-8c09-421fd5b9aaa8] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 8.015621795s
addons_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-867324 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1205 19:54:42.965604    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.crt: no such file or directory
E1205 19:56:04.886086    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.crt: no such file or directory
E1205 19:56:15.520481    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/functional-025502/client.crt: no such file or directory
E1205 19:56:15.525719    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/functional-025502/client.crt: no such file or directory
E1205 19:56:15.535948    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/functional-025502/client.crt: no such file or directory
E1205 19:56:15.556185    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/functional-025502/client.crt: no such file or directory
E1205 19:56:15.596406    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/functional-025502/client.crt: no such file or directory
E1205 19:56:15.676680    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/functional-025502/client.crt: no such file or directory
E1205 19:56:15.837084    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/functional-025502/client.crt: no such file or directory
E1205 19:56:16.157583    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/functional-025502/client.crt: no such file or directory
E1205 19:56:16.798432    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/functional-025502/client.crt: no such file or directory
E1205 19:56:18.078789    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/functional-025502/client.crt: no such file or directory
E1205 19:56:20.639052    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/functional-025502/client.crt: no such file or directory
E1205 19:56:25.759212    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/functional-025502/client.crt: no such file or directory
E1205 19:56:36.000078    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/functional-025502/client.crt: no such file or directory
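The cert_rotation.go:168 errors interleaved here are background noise: client-go's certificate reloader is still watching client certs for the addons-753790 and functional-025502 profiles, and functional-025502 was deleted minutes earlier (see the Audit table below), so the key files no longer exist. They do not bear on the ingress failure. An illustrative filter for reading a captured log, where test.log is a placeholder file name:

	# Drop the stale-certificate noise, keep everything else:
	grep -v 'cert_rotation.go:168' test.log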
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ingress-addon-legacy-867324 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.974811338s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
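`ssh: Process exited with status 28` is the remote command's exit code passed back through minikube ssh, and 28 is curl's CURLE_OPERATION_TIMEDOUT, so the request to the controller on 127.0.0.1:80 stalled rather than being refused outright. A minimal manual re-probe, sketched on the assumption that the profile is still up (standard curl/kubectl flags only):

	# Verbose probe with an explicit deadline, run inside the node:
	minikube -p ingress-addon-legacy-867324 ssh -- curl -v --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/
	# Does the controller service actually have endpoints behind it?
	kubectl --context ingress-addon-legacy-867324 -n ingress-nginx get pods,endpoints -o wide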
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-867324 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-867324 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E1205 19:56:56.480877    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/functional-025502/client.crt: no such file or directory
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.009559765s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached
stderr: 
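The nslookup step asks the minikube node itself (192.168.49.2) to resolve hello-john.test, which the ingress-dns addon should answer; "no servers could be reached" means nothing responded on port 53 at that address. An equivalent query that fails fast instead of waiting 15s, sketched with standard dig options and again assuming the profile is still up:

	# Query the node's DNS directly with a 2s timeout and a single try:
	dig @192.168.49.2 hello-john.test +time=2 +tries=1
	# Is anything listening on UDP port 53 inside the node?
	minikube -p ingress-addon-legacy-867324 ssh -- sudo ss -ulpn | grep ':53'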
addons_test.go:305: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-867324 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-867324 addons disable ingress-dns --alsologtostderr -v=1: (2.466489905s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-867324 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-867324 addons disable ingress --alsologtostderr -v=1: (7.531185846s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-867324
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-867324:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "943d05e088eb9665ae5404e5d8827e9edc8294f22528763f01fd6358863e6afb",
	        "Created": "2023-12-05T19:52:50.159041357Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 36630,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-05T19:52:50.476267869Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4e0f3cc6f04c458835e9edb05d52f031520d40521bc3568d81cbb7c06a79ef2",
	        "ResolvConfPath": "/var/lib/docker/containers/943d05e088eb9665ae5404e5d8827e9edc8294f22528763f01fd6358863e6afb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/943d05e088eb9665ae5404e5d8827e9edc8294f22528763f01fd6358863e6afb/hostname",
	        "HostsPath": "/var/lib/docker/containers/943d05e088eb9665ae5404e5d8827e9edc8294f22528763f01fd6358863e6afb/hosts",
	        "LogPath": "/var/lib/docker/containers/943d05e088eb9665ae5404e5d8827e9edc8294f22528763f01fd6358863e6afb/943d05e088eb9665ae5404e5d8827e9edc8294f22528763f01fd6358863e6afb-json.log",
	        "Name": "/ingress-addon-legacy-867324",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ingress-addon-legacy-867324:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-867324",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0646f06f1620ea0705530b303cf62c74d7dafce6793a59ba8c099129cb56e33f-init/diff:/var/lib/docker/overlay2/ad36f68c22d2503e0656ab5d87c276f08a38342a08463cd6653b41bc4f40eea5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0646f06f1620ea0705530b303cf62c74d7dafce6793a59ba8c099129cb56e33f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0646f06f1620ea0705530b303cf62c74d7dafce6793a59ba8c099129cb56e33f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0646f06f1620ea0705530b303cf62c74d7dafce6793a59ba8c099129cb56e33f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-867324",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-867324/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-867324",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-867324",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-867324",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5381285eeba5c6f8496f08ef2f9866f78dae7e90fc529b918a7f0828432fe41e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/5381285eeba5",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-867324": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "943d05e088eb",
	                        "ingress-addon-legacy-867324"
	                    ],
	                    "NetworkID": "63aa3e5cb077ef7d0586bc301c23a47cd4d6789450e562bb5c81e48bcc2d18a2",
	                    "EndpointID": "cc241e0a8d84122cf131c2ca36a887557f22942c5511947630aa90b46964a063",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
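Nothing in the inspect output implicates the Docker layer: the container is Running, holds 192.168.49.2 on its dedicated ingress-addon-legacy-867324 network, and has ports 22/2376/5000/8443/32443 published on 127.0.0.1. To extract just those fields instead of re-reading the full document, an illustrative one-liner using docker's built-in Go templating:

	# Container state plus the published port map, no external tools needed:
	docker inspect ingress-addon-legacy-867324 --format '{{.State.Status}} {{json .NetworkSettings.Ports}}'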
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-867324 -n ingress-addon-legacy-867324
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-867324 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-867324 logs -n 25: (1.34163956s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image   | functional-025502 image load --daemon                                  | functional-025502           | jenkins | v1.32.0 | 05 Dec 23 19:52 UTC | 05 Dec 23 19:52 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-025502               |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-025502 image ls                                             | functional-025502           | jenkins | v1.32.0 | 05 Dec 23 19:52 UTC | 05 Dec 23 19:52 UTC |
	| image   | functional-025502 image load --daemon                                  | functional-025502           | jenkins | v1.32.0 | 05 Dec 23 19:52 UTC | 05 Dec 23 19:52 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-025502               |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-025502 image ls                                             | functional-025502           | jenkins | v1.32.0 | 05 Dec 23 19:52 UTC | 05 Dec 23 19:52 UTC |
	| image   | functional-025502 image save                                           | functional-025502           | jenkins | v1.32.0 | 05 Dec 23 19:52 UTC | 05 Dec 23 19:52 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-025502               |                             |         |         |                     |                     |
	|         | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-025502 image rm                                             | functional-025502           | jenkins | v1.32.0 | 05 Dec 23 19:52 UTC | 05 Dec 23 19:52 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-025502               |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-025502 image ls                                             | functional-025502           | jenkins | v1.32.0 | 05 Dec 23 19:52 UTC | 05 Dec 23 19:52 UTC |
	| image   | functional-025502 image load                                           | functional-025502           | jenkins | v1.32.0 | 05 Dec 23 19:52 UTC | 05 Dec 23 19:52 UTC |
	|         | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-025502 image ls                                             | functional-025502           | jenkins | v1.32.0 | 05 Dec 23 19:52 UTC | 05 Dec 23 19:52 UTC |
	| image   | functional-025502 image save --daemon                                  | functional-025502           | jenkins | v1.32.0 | 05 Dec 23 19:52 UTC | 05 Dec 23 19:52 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-025502               |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-025502                                                      | functional-025502           | jenkins | v1.32.0 | 05 Dec 23 19:52 UTC | 05 Dec 23 19:52 UTC |
	|         | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-025502                                                      | functional-025502           | jenkins | v1.32.0 | 05 Dec 23 19:52 UTC | 05 Dec 23 19:52 UTC |
	|         | image ls --format short                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh     | functional-025502 ssh pgrep                                            | functional-025502           | jenkins | v1.32.0 | 05 Dec 23 19:52 UTC |                     |
	|         | buildkitd                                                              |                             |         |         |                     |                     |
	| image   | functional-025502                                                      | functional-025502           | jenkins | v1.32.0 | 05 Dec 23 19:52 UTC | 05 Dec 23 19:52 UTC |
	|         | image ls --format json                                                 |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-025502                                                      | functional-025502           | jenkins | v1.32.0 | 05 Dec 23 19:52 UTC | 05 Dec 23 19:52 UTC |
	|         | image ls --format table                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-025502 image build -t                                       | functional-025502           | jenkins | v1.32.0 | 05 Dec 23 19:52 UTC | 05 Dec 23 19:52 UTC |
	|         | localhost/my-image:functional-025502                                   |                             |         |         |                     |                     |
	|         | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image   | functional-025502 image ls                                             | functional-025502           | jenkins | v1.32.0 | 05 Dec 23 19:52 UTC | 05 Dec 23 19:52 UTC |
	| delete  | -p functional-025502                                                   | functional-025502           | jenkins | v1.32.0 | 05 Dec 23 19:52 UTC | 05 Dec 23 19:52 UTC |
	| start   | -p ingress-addon-legacy-867324                                         | ingress-addon-legacy-867324 | jenkins | v1.32.0 | 05 Dec 23 19:52 UTC | 05 Dec 23 19:54 UTC |
	|         | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|         | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-867324                                            | ingress-addon-legacy-867324 | jenkins | v1.32.0 | 05 Dec 23 19:54 UTC | 05 Dec 23 19:54 UTC |
	|         | addons enable ingress                                                  |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-867324                                            | ingress-addon-legacy-867324 | jenkins | v1.32.0 | 05 Dec 23 19:54 UTC | 05 Dec 23 19:54 UTC |
	|         | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-867324                                            | ingress-addon-legacy-867324 | jenkins | v1.32.0 | 05 Dec 23 19:54 UTC |                     |
	|         | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-867324 ip                                         | ingress-addon-legacy-867324 | jenkins | v1.32.0 | 05 Dec 23 19:56 UTC | 05 Dec 23 19:56 UTC |
	| addons  | ingress-addon-legacy-867324                                            | ingress-addon-legacy-867324 | jenkins | v1.32.0 | 05 Dec 23 19:56 UTC | 05 Dec 23 19:57 UTC |
	|         | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-867324                                            | ingress-addon-legacy-867324 | jenkins | v1.32.0 | 05 Dec 23 19:57 UTC | 05 Dec 23 19:57 UTC |
	|         | addons disable ingress                                                 |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|---------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/05 19:52:23
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 19:52:23.269864   36163 out.go:296] Setting OutFile to fd 1 ...
	I1205 19:52:23.270083   36163 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:52:23.270109   36163 out.go:309] Setting ErrFile to fd 2...
	I1205 19:52:23.270127   36163 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:52:23.270413   36163 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-2478/.minikube/bin
	I1205 19:52:23.270871   36163 out.go:303] Setting JSON to false
	I1205 19:52:23.272130   36163 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2090,"bootTime":1701803854,"procs":464,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 19:52:23.272220   36163 start.go:138] virtualization:  
	I1205 19:52:23.275134   36163 out.go:177] * [ingress-addon-legacy-867324] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1205 19:52:23.278170   36163 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 19:52:23.280403   36163 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:52:23.278260   36163 notify.go:220] Checking for updates...
	I1205 19:52:23.282620   36163 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-2478/kubeconfig
	I1205 19:52:23.284394   36163 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-2478/.minikube
	I1205 19:52:23.286708   36163 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 19:52:23.288890   36163 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 19:52:23.291092   36163 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 19:52:23.313823   36163 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1205 19:52:23.313925   36163 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:52:23.378817   36163 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-12-05 19:52:23.369452786 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1205 19:52:23.378918   36163 docker.go:295] overlay module found
	I1205 19:52:23.381529   36163 out.go:177] * Using the docker driver based on user configuration
	I1205 19:52:23.383864   36163 start.go:298] selected driver: docker
	I1205 19:52:23.383878   36163 start.go:902] validating driver "docker" against <nil>
	I1205 19:52:23.383890   36163 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 19:52:23.384534   36163 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:52:23.459253   36163 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-12-05 19:52:23.44981856 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archit
ecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1205 19:52:23.459404   36163 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1205 19:52:23.459635   36163 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 19:52:23.461734   36163 out.go:177] * Using Docker driver with root privileges
	I1205 19:52:23.463653   36163 cni.go:84] Creating CNI manager for ""
	I1205 19:52:23.463670   36163 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 19:52:23.463683   36163 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 19:52:23.463693   36163 start_flags.go:323] config:
	{Name:ingress-addon-legacy-867324 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-867324 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:52:23.466573   36163 out.go:177] * Starting control plane node ingress-addon-legacy-867324 in cluster ingress-addon-legacy-867324
	I1205 19:52:23.468259   36163 cache.go:121] Beginning downloading kic base image for docker with crio
	I1205 19:52:23.470146   36163 out.go:177] * Pulling base image ...
	I1205 19:52:23.472073   36163 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1205 19:52:23.472349   36163 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local docker daemon
	I1205 19:52:23.491456   36163 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local docker daemon, skipping pull
	I1205 19:52:23.491477   36163 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f exists in daemon, skipping load
	I1205 19:52:23.672583   36163 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1205 19:52:23.672607   36163 cache.go:56] Caching tarball of preloaded images
	I1205 19:52:23.672757   36163 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1205 19:52:23.674866   36163 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1205 19:52:23.676527   36163 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1205 19:52:24.032593   36163 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/17731-2478/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1205 19:52:42.272635   36163 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1205 19:52:42.272767   36163 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17731-2478/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1205 19:52:43.455419   36163 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I1205 19:52:43.455835   36163 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/config.json ...
	I1205 19:52:43.455871   36163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/config.json: {Name:mk6f36bbb03173a702cd4bac56c9e7aecbfcdfb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:52:43.456067   36163 cache.go:194] Successfully downloaded all kic artifacts
	I1205 19:52:43.456111   36163 start.go:365] acquiring machines lock for ingress-addon-legacy-867324: {Name:mk828c7cd2375f2a5b8b25c9ea4800dbea7fd84e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:52:43.456173   36163 start.go:369] acquired machines lock for "ingress-addon-legacy-867324" in 47.311µs
	I1205 19:52:43.456194   36163 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-867324 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-867324 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:52:43.456271   36163 start.go:125] createHost starting for "" (driver="docker")
	I1205 19:52:43.458708   36163 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1205 19:52:43.458916   36163 start.go:159] libmachine.API.Create for "ingress-addon-legacy-867324" (driver="docker")
	I1205 19:52:43.458939   36163 client.go:168] LocalClient.Create starting
	I1205 19:52:43.459014   36163 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem
	I1205 19:52:43.459048   36163 main.go:141] libmachine: Decoding PEM data...
	I1205 19:52:43.459068   36163 main.go:141] libmachine: Parsing certificate...
	I1205 19:52:43.459123   36163 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17731-2478/.minikube/certs/cert.pem
	I1205 19:52:43.459148   36163 main.go:141] libmachine: Decoding PEM data...
	I1205 19:52:43.459164   36163 main.go:141] libmachine: Parsing certificate...
	I1205 19:52:43.459502   36163 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-867324 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 19:52:43.476108   36163 cli_runner.go:211] docker network inspect ingress-addon-legacy-867324 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 19:52:43.476183   36163 network_create.go:281] running [docker network inspect ingress-addon-legacy-867324] to gather additional debugging logs...
	I1205 19:52:43.476202   36163 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-867324
	W1205 19:52:43.492280   36163 cli_runner.go:211] docker network inspect ingress-addon-legacy-867324 returned with exit code 1
	I1205 19:52:43.492314   36163 network_create.go:284] error running [docker network inspect ingress-addon-legacy-867324]: docker network inspect ingress-addon-legacy-867324: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-867324 not found
	I1205 19:52:43.492328   36163 network_create.go:286] output of [docker network inspect ingress-addon-legacy-867324]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-867324 not found
	
	** /stderr **
	I1205 19:52:43.492450   36163 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 19:52:43.509332   36163 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400000e9a0}
	I1205 19:52:43.509370   36163 network_create.go:124] attempt to create docker network ingress-addon-legacy-867324 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1205 19:52:43.509423   36163 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-867324 ingress-addon-legacy-867324
	I1205 19:52:43.579735   36163 network_create.go:108] docker network ingress-addon-legacy-867324 192.168.49.0/24 created
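	The docker network create above gives each minikube profile its own bridge network with a fixed subnet. The subnet and gateway the log reports can be read back with a follow-up inspect (a sketch, not part of the test run):

	  docker network inspect ingress-addon-legacy-867324 \
	    --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	  # expected output, per the log: 192.168.49.0/24 192.168.49.1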
	I1205 19:52:43.579802   36163 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-867324" container
	I1205 19:52:43.579875   36163 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 19:52:43.598042   36163 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-867324 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-867324 --label created_by.minikube.sigs.k8s.io=true
	I1205 19:52:43.615772   36163 oci.go:103] Successfully created a docker volume ingress-addon-legacy-867324
	I1205 19:52:43.615850   36163 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-867324-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-867324 --entrypoint /usr/bin/test -v ingress-addon-legacy-867324:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -d /var/lib
	I1205 19:52:45.143943   36163 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-867324-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-867324 --entrypoint /usr/bin/test -v ingress-addon-legacy-867324:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -d /var/lib: (1.528048446s)
	I1205 19:52:45.143975   36163 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-867324
	I1205 19:52:45.144004   36163 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1205 19:52:45.144023   36163 kic.go:194] Starting extracting preloaded images to volume ...
	I1205 19:52:45.144116   36163 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17731-2478/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-867324:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -I lz4 -xf /preloaded.tar -C /extractDir
	I1205 19:52:50.075707   36163 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17731-2478/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-867324:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -I lz4 -xf /preloaded.tar -C /extractDir: (4.931549437s)
	I1205 19:52:50.075740   36163 kic.go:203] duration metric: took 4.931714 seconds to extract preloaded images to volume
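	At this point the preloaded images live only in the profile's docker volume, not yet in a running node container. A sketch of how one could peek at the extracted layout, assuming the same kicbase image and that cri-o storage lands under /var/lib/containers (not part of the test run):

	  docker run --rm -v ingress-addon-legacy-867324:/var \
	    gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f \
	    ls /var/lib/containers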
	W1205 19:52:50.075903   36163 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1205 19:52:50.076040   36163 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 19:52:50.143191   36163 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-867324 --name ingress-addon-legacy-867324 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-867324 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-867324 --network ingress-addon-legacy-867324 --ip 192.168.49.2 --volume ingress-addon-legacy-867324:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f
	I1205 19:52:50.486696   36163 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-867324 --format={{.State.Running}}
	I1205 19:52:50.508071   36163 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-867324 --format={{.State.Status}}
	I1205 19:52:50.537799   36163 cli_runner.go:164] Run: docker exec ingress-addon-legacy-867324 stat /var/lib/dpkg/alternatives/iptables
	I1205 19:52:50.638251   36163 oci.go:144] the created container "ingress-addon-legacy-867324" has a running status.
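	The long docker run above publishes SSH (22), the Docker API (2376) and the apiserver (8443), among others, on ephemeral localhost ports. The mapping minikube resolves a few lines below (port 32787 for SSH) can be read back directly (a sketch, not part of the test run):

	  docker port ingress-addon-legacy-867324 22
	  # e.g. 127.0.0.1:32787, matching the SSH client created below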
	I1205 19:52:50.638282   36163 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17731-2478/.minikube/machines/ingress-addon-legacy-867324/id_rsa...
	I1205 19:52:50.977913   36163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/machines/ingress-addon-legacy-867324/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1205 19:52:50.977961   36163 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17731-2478/.minikube/machines/ingress-addon-legacy-867324/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 19:52:51.005977   36163 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-867324 --format={{.State.Status}}
	I1205 19:52:51.042715   36163 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 19:52:51.042736   36163 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-867324 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 19:52:51.142661   36163 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-867324 --format={{.State.Status}}
	I1205 19:52:51.164325   36163 machine.go:88] provisioning docker machine ...
	I1205 19:52:51.164355   36163 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-867324"
	I1205 19:52:51.164421   36163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-867324
	I1205 19:52:51.198694   36163 main.go:141] libmachine: Using SSH client type: native
	I1205 19:52:51.199168   36163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I1205 19:52:51.199190   36163 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-867324 && echo "ingress-addon-legacy-867324" | sudo tee /etc/hostname
	I1205 19:52:51.418738   36163 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-867324
	
	I1205 19:52:51.418878   36163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-867324
	I1205 19:52:51.451970   36163 main.go:141] libmachine: Using SSH client type: native
	I1205 19:52:51.452405   36163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I1205 19:52:51.452440   36163 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-867324' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-867324/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-867324' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 19:52:51.608877   36163 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:52:51.608918   36163 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17731-2478/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-2478/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-2478/.minikube}
	I1205 19:52:51.608970   36163 ubuntu.go:177] setting up certificates
	I1205 19:52:51.608980   36163 provision.go:83] configureAuth start
	I1205 19:52:51.609060   36163 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-867324
	I1205 19:52:51.627101   36163 provision.go:138] copyHostCerts
	I1205 19:52:51.627141   36163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17731-2478/.minikube/cert.pem
	I1205 19:52:51.627171   36163 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-2478/.minikube/cert.pem, removing ...
	I1205 19:52:51.627183   36163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-2478/.minikube/cert.pem
	I1205 19:52:51.627248   36163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-2478/.minikube/cert.pem (1123 bytes)
	I1205 19:52:51.627318   36163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17731-2478/.minikube/key.pem
	I1205 19:52:51.627338   36163 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-2478/.minikube/key.pem, removing ...
	I1205 19:52:51.627344   36163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-2478/.minikube/key.pem
	I1205 19:52:51.627373   36163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-2478/.minikube/key.pem (1679 bytes)
	I1205 19:52:51.627416   36163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17731-2478/.minikube/ca.pem
	I1205 19:52:51.627436   36163 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-2478/.minikube/ca.pem, removing ...
	I1205 19:52:51.627443   36163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-2478/.minikube/ca.pem
	I1205 19:52:51.627467   36163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-2478/.minikube/ca.pem (1078 bytes)
	I1205 19:52:51.627519   36163 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-2478/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-867324 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-867324]
	I1205 19:52:51.975868   36163 provision.go:172] copyRemoteCerts
	I1205 19:52:51.975956   36163 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 19:52:51.976000   36163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-867324
	I1205 19:52:51.992995   36163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/ingress-addon-legacy-867324/id_rsa Username:docker}
	I1205 19:52:52.097532   36163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 19:52:52.097589   36163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 19:52:52.123504   36163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 19:52:52.123561   36163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1205 19:52:52.149611   36163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 19:52:52.149666   36163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 19:52:52.176040   36163 provision.go:86] duration metric: configureAuth took 567.042726ms
	I1205 19:52:52.176065   36163 ubuntu.go:193] setting minikube options for container-runtime
	I1205 19:52:52.176270   36163 config.go:182] Loaded profile config "ingress-addon-legacy-867324": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1205 19:52:52.176375   36163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-867324
	I1205 19:52:52.192761   36163 main.go:141] libmachine: Using SSH client type: native
	I1205 19:52:52.193174   36163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I1205 19:52:52.193197   36163 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 19:52:52.470092   36163 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 19:52:52.470117   36163 machine.go:91] provisioned docker machine in 1.305769394s
	I1205 19:52:52.470126   36163 client.go:171] LocalClient.Create took 9.011177493s
	I1205 19:52:52.470146   36163 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-867324" took 9.011228726s
	I1205 19:52:52.470155   36163 start.go:300] post-start starting for "ingress-addon-legacy-867324" (driver="docker")
	I1205 19:52:52.470165   36163 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 19:52:52.470226   36163 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 19:52:52.470269   36163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-867324
	I1205 19:52:52.489591   36163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/ingress-addon-legacy-867324/id_rsa Username:docker}
	I1205 19:52:52.593818   36163 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 19:52:52.597556   36163 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 19:52:52.597597   36163 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1205 19:52:52.597627   36163 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1205 19:52:52.597641   36163 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1205 19:52:52.597652   36163 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-2478/.minikube/addons for local assets ...
	I1205 19:52:52.597716   36163 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-2478/.minikube/files for local assets ...
	I1205 19:52:52.597802   36163 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-2478/.minikube/files/etc/ssl/certs/77732.pem -> 77732.pem in /etc/ssl/certs
	I1205 19:52:52.597813   36163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/files/etc/ssl/certs/77732.pem -> /etc/ssl/certs/77732.pem
	I1205 19:52:52.597917   36163 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 19:52:52.607034   36163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/files/etc/ssl/certs/77732.pem --> /etc/ssl/certs/77732.pem (1708 bytes)
	I1205 19:52:52.632268   36163 start.go:303] post-start completed in 162.098735ms
	I1205 19:52:52.632633   36163 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-867324
	I1205 19:52:52.652462   36163 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/config.json ...
	I1205 19:52:52.652721   36163 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 19:52:52.652769   36163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-867324
	I1205 19:52:52.669318   36163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/ingress-addon-legacy-867324/id_rsa Username:docker}
	I1205 19:52:52.773428   36163 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 19:52:52.778507   36163 start.go:128] duration metric: createHost completed in 9.322223103s
	I1205 19:52:52.778527   36163 start.go:83] releasing machines lock for "ingress-addon-legacy-867324", held for 9.32234216s
	I1205 19:52:52.778591   36163 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-867324
	I1205 19:52:52.795016   36163 ssh_runner.go:195] Run: cat /version.json
	I1205 19:52:52.795032   36163 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 19:52:52.795075   36163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-867324
	I1205 19:52:52.795090   36163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-867324
	I1205 19:52:52.820159   36163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/ingress-addon-legacy-867324/id_rsa Username:docker}
	I1205 19:52:52.830176   36163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/ingress-addon-legacy-867324/id_rsa Username:docker}
	I1205 19:52:53.051031   36163 ssh_runner.go:195] Run: systemctl --version
	I1205 19:52:53.056188   36163 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 19:52:53.200354   36163 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 19:52:53.205802   36163 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:52:53.229378   36163 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1205 19:52:53.229459   36163 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:52:53.266014   36163 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
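	minikube renames the stock bridge/podman CNI configs (the two files listed above get a .mk_disabled suffix) so that the kindnet CNI it recommends later is the only active pod network. What remains active in /etc/cni/net.d can be listed over the profile's SSH port (a sketch, not part of the test run; key path and port taken from the sshutil lines above):

	  ssh -p 32787 \
	    -i /home/jenkins/minikube-integration/17731-2478/.minikube/machines/ingress-addon-legacy-867324/id_rsa \
	    docker@127.0.0.1 -- ls /etc/cni/net.d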
	I1205 19:52:53.266033   36163 start.go:475] detecting cgroup driver to use...
	I1205 19:52:53.266063   36163 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1205 19:52:53.266108   36163 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 19:52:53.284243   36163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 19:52:53.297070   36163 docker.go:203] disabling cri-docker service (if available) ...
	I1205 19:52:53.297129   36163 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 19:52:53.312501   36163 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 19:52:53.328005   36163 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 19:52:53.417460   36163 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 19:52:53.518361   36163 docker.go:219] disabling docker service ...
	I1205 19:52:53.518446   36163 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 19:52:53.539342   36163 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 19:52:53.553674   36163 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 19:52:53.650452   36163 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 19:52:53.750999   36163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 19:52:53.764399   36163 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 19:52:53.783317   36163 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1205 19:52:53.783420   36163 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:52:53.794401   36163 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 19:52:53.794515   36163 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:52:53.805648   36163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:52:53.816496   36163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:52:53.827422   36163 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 19:52:53.837580   36163 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 19:52:53.846866   36163 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 19:52:53.856058   36163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:52:53.954567   36163 ssh_runner.go:195] Run: sudo systemctl restart crio
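	After the three sed edits above, the cri-o drop-in should read roughly as follows before the restart. This is a reconstruction from the sed patterns, assuming the stock kicbase layout of /etc/crio/crio.conf.d/02-crio.conf:

	  [crio.runtime]
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"

	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.2"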
	I1205 19:52:54.079804   36163 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 19:52:54.079923   36163 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 19:52:54.084566   36163 start.go:543] Will wait 60s for crictl version
	I1205 19:52:54.084664   36163 ssh_runner.go:195] Run: which crictl
	I1205 19:52:54.089146   36163 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 19:52:54.134555   36163 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1205 19:52:54.134671   36163 ssh_runner.go:195] Run: crio --version
	I1205 19:52:54.174840   36163 ssh_runner.go:195] Run: crio --version
	I1205 19:52:54.218984   36163 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I1205 19:52:54.220861   36163 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-867324 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 19:52:54.237506   36163 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1205 19:52:54.241718   36163 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:52:54.253760   36163 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1205 19:52:54.253824   36163 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:52:54.300226   36163 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1205 19:52:54.300296   36163 ssh_runner.go:195] Run: which lz4
	I1205 19:52:54.304409   36163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1205 19:52:54.304495   36163 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1205 19:52:54.308400   36163 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 19:52:54.308427   36163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I1205 19:52:56.500455   36163 crio.go:444] Took 2.195992 seconds to copy over tarball
	I1205 19:52:56.500564   36163 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 19:52:59.029992   36163 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.52937709s)
	I1205 19:52:59.030016   36163 crio.go:451] Took 2.529499 seconds to extract the tarball
	I1205 19:52:59.030026   36163 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 19:52:59.118210   36163 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:52:59.158182   36163 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1205 19:52:59.158202   36163 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 19:52:59.158239   36163 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 19:52:59.158432   36163 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1205 19:52:59.158495   36163 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1205 19:52:59.158577   36163 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1205 19:52:59.158646   36163 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1205 19:52:59.158702   36163 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1205 19:52:59.158755   36163 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1205 19:52:59.158813   36163 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1205 19:52:59.160942   36163 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1205 19:52:59.161251   36163 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1205 19:52:59.161398   36163 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1205 19:52:59.161509   36163 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1205 19:52:59.161616   36163 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1205 19:52:59.161722   36163 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 19:52:59.161936   36163 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1205 19:52:59.162168   36163 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	W1205 19:52:59.493105   36163 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1205 19:52:59.493352   36163 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	W1205 19:52:59.515025   36163 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1205 19:52:59.515438   36163 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1205 19:52:59.541724   36163 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W1205 19:52:59.541839   36163 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I1205 19:52:59.542055   36163 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1205 19:52:59.549745   36163 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1205 19:52:59.549803   36163 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1205 19:52:59.549855   36163 ssh_runner.go:195] Run: which crictl
	W1205 19:52:59.551867   36163 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1205 19:52:59.552014   36163 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	W1205 19:52:59.564581   36163 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1205 19:52:59.564749   36163 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	W1205 19:52:59.597097   36163 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I1205 19:52:59.597362   36163 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1205 19:52:59.603547   36163 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1205 19:52:59.603622   36163 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1205 19:52:59.603704   36163 ssh_runner.go:195] Run: which crictl
	I1205 19:52:59.693716   36163 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1205 19:52:59.693802   36163 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1205 19:52:59.693878   36163 ssh_runner.go:195] Run: which crictl
	I1205 19:52:59.693995   36163 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1205 19:52:59.694040   36163 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1205 19:52:59.694077   36163 ssh_runner.go:195] Run: which crictl
	I1205 19:52:59.694184   36163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1205 19:52:59.694298   36163 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1205 19:52:59.694347   36163 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1205 19:52:59.694385   36163 ssh_runner.go:195] Run: which crictl
	I1205 19:52:59.717728   36163 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1205 19:52:59.717815   36163 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1205 19:52:59.717899   36163 ssh_runner.go:195] Run: which crictl
	I1205 19:52:59.737157   36163 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1205 19:52:59.737256   36163 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1205 19:52:59.737318   36163 ssh_runner.go:195] Run: which crictl
	I1205 19:52:59.737515   36163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1205 19:52:59.767382   36163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1205 19:52:59.767393   36163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 19:52:59.767467   36163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1205 19:52:59.767532   36163 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I1205 19:52:59.767576   36163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	W1205 19:52:59.807258   36163 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1205 19:52:59.807498   36163 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 19:52:59.820535   36163 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1205 19:52:59.820612   36163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1205 19:52:59.934448   36163 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I1205 19:52:59.934509   36163 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I1205 19:52:59.934546   36163 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1205 19:52:59.934589   36163 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I1205 19:53:00.048325   36163 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1205 19:53:00.048374   36163 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 19:53:00.048433   36163 ssh_runner.go:195] Run: which crictl
	I1205 19:53:00.048546   36163 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I1205 19:53:00.053731   36163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 19:53:00.117429   36163 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 19:53:00.117509   36163 cache_images.go:92] LoadImages completed in 959.293955ms
	W1205 19:53:00.117574   36163 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20: no such file or directory
	I1205 19:53:00.117658   36163 ssh_runner.go:195] Run: crio config
	I1205 19:53:00.175108   36163 cni.go:84] Creating CNI manager for ""
	I1205 19:53:00.175132   36163 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 19:53:00.175166   36163 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 19:53:00.175186   36163 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-867324 NodeName:ingress-addon-legacy-867324 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1205 19:53:00.175342   36163 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-867324"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
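	This rendered config is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. It could be validated without touching the node state via a dry run against the pinned kubeadm binary (a sketch, not part of the test run):

	  sudo /var/lib/minikube/binaries/v1.18.20/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml.new --dry-run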
	
	I1205 19:53:00.175421   36163 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-867324 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-867324 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
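	The unit override above lands as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (scp'd a few lines below), so making it take effect follows the usual systemd pattern (a sketch, not part of the test run):

	  sudo systemctl daemon-reload
	  sudo systemctl restart kubelet
	  systemctl cat kubelet    # shows the base unit plus the 10-kubeadm.conf drop-in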
	I1205 19:53:00.175486   36163 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1205 19:53:00.186665   36163 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 19:53:00.186744   36163 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 19:53:00.197879   36163 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I1205 19:53:00.219669   36163 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1205 19:53:00.240689   36163 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1205 19:53:00.261401   36163 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1205 19:53:00.265535   36163 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:53:00.278194   36163 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324 for IP: 192.168.49.2
	I1205 19:53:00.278228   36163 certs.go:190] acquiring lock for shared ca certs: {Name:mk8ef93a51958e82275f202c3866b092b6aa4ced Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:53:00.278359   36163 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17731-2478/.minikube/ca.key
	I1205 19:53:00.278405   36163 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17731-2478/.minikube/proxy-client-ca.key
	I1205 19:53:00.278452   36163 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.key
	I1205 19:53:00.278468   36163 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.crt with IP's: []
	I1205 19:53:00.515772   36163 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.crt ...
	I1205 19:53:00.515799   36163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.crt: {Name:mk7ea01c166e95e32ccb3159a67df7aa32e8e5e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:53:00.515988   36163 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.key ...
	I1205 19:53:00.516003   36163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.key: {Name:mk32661420a06591b13b47e5ae1c95538d3f9599 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:53:00.516095   36163 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/apiserver.key.dd3b5fb2
	I1205 19:53:00.516113   36163 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1205 19:53:01.074372   36163 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/apiserver.crt.dd3b5fb2 ...
	I1205 19:53:01.074406   36163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/apiserver.crt.dd3b5fb2: {Name:mkedea3c26eac992c8a20823a9b13b3e97fad2fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:53:01.074585   36163 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/apiserver.key.dd3b5fb2 ...
	I1205 19:53:01.074606   36163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/apiserver.key.dd3b5fb2: {Name:mkb489c61aa660f04eb72007f0f9a3147efa5939 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:53:01.074690   36163 certs.go:337] copying /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/apiserver.crt
	I1205 19:53:01.074773   36163 certs.go:341] copying /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/apiserver.key
	I1205 19:53:01.074834   36163 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/proxy-client.key
	I1205 19:53:01.074851   36163 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/proxy-client.crt with IP's: []
	I1205 19:53:01.731109   36163 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/proxy-client.crt ...
	I1205 19:53:01.731140   36163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/proxy-client.crt: {Name:mk48632962ad10b8310cd8d45eb87465aff28969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:53:01.731318   36163 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/proxy-client.key ...
	I1205 19:53:01.731331   36163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/proxy-client.key: {Name:mkea63a4d1963b77fb36c362c0e9a47620022a55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:53:01.731405   36163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 19:53:01.731427   36163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 19:53:01.731441   36163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 19:53:01.731456   36163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 19:53:01.731471   36163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 19:53:01.731483   36163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 19:53:01.731498   36163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 19:53:01.731510   36163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 19:53:01.731571   36163 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/home/jenkins/minikube-integration/17731-2478/.minikube/certs/7773.pem (1338 bytes)
	W1205 19:53:01.731613   36163 certs.go:433] ignoring /home/jenkins/minikube-integration/17731-2478/.minikube/certs/home/jenkins/minikube-integration/17731-2478/.minikube/certs/7773_empty.pem, impossibly tiny 0 bytes
	I1205 19:53:01.731630   36163 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 19:53:01.731656   36163 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem (1078 bytes)
	I1205 19:53:01.731683   36163 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/home/jenkins/minikube-integration/17731-2478/.minikube/certs/cert.pem (1123 bytes)
	I1205 19:53:01.731711   36163 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/home/jenkins/minikube-integration/17731-2478/.minikube/certs/key.pem (1679 bytes)
	I1205 19:53:01.731782   36163 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-2478/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17731-2478/.minikube/files/etc/ssl/certs/77732.pem (1708 bytes)
	I1205 19:53:01.731815   36163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:53:01.731834   36163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/7773.pem -> /usr/share/ca-certificates/7773.pem
	I1205 19:53:01.731849   36163 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/files/etc/ssl/certs/77732.pem -> /usr/share/ca-certificates/77732.pem
	I1205 19:53:01.732425   36163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1205 19:53:01.759730   36163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 19:53:01.785177   36163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 19:53:01.811398   36163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 19:53:01.837262   36163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 19:53:01.863355   36163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 19:53:01.888094   36163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 19:53:01.913517   36163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 19:53:01.939728   36163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 19:53:01.965486   36163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/certs/7773.pem --> /usr/share/ca-certificates/7773.pem (1338 bytes)
	I1205 19:53:01.991568   36163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/files/etc/ssl/certs/77732.pem --> /usr/share/ca-certificates/77732.pem (1708 bytes)
	I1205 19:53:02.018544   36163 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 19:53:02.038081   36163 ssh_runner.go:195] Run: openssl version
	I1205 19:53:02.044628   36163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 19:53:02.055367   36163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:53:02.059522   36163 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:53:02.059586   36163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:53:02.067734   36163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 19:53:02.078476   36163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7773.pem && ln -fs /usr/share/ca-certificates/7773.pem /etc/ssl/certs/7773.pem"
	I1205 19:53:02.089083   36163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7773.pem
	I1205 19:53:02.093490   36163 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/7773.pem
	I1205 19:53:02.093561   36163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7773.pem
	I1205 19:53:02.101828   36163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7773.pem /etc/ssl/certs/51391683.0"
	I1205 19:53:02.112731   36163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77732.pem && ln -fs /usr/share/ca-certificates/77732.pem /etc/ssl/certs/77732.pem"
	I1205 19:53:02.123510   36163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77732.pem
	I1205 19:53:02.128049   36163 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/77732.pem
	I1205 19:53:02.128107   36163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77732.pem
	I1205 19:53:02.136357   36163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/77732.pem /etc/ssl/certs/3ec20f2e.0"
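
The openssl/ln pairs above follow OpenSSL's subject-hash lookup convention: a CA placed in /etc/ssl/certs is found via a <subject_hash>.0 symlink pointing at the PEM (hence b5213941.0, 51391683.0, 3ec20f2e.0). A minimal sketch of the same pattern, assuming the minikubeCA.pem path used in this log:

    # Compute the subject hash and create the lookup symlink OpenSSL expects.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
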
	I1205 19:53:02.147284   36163 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 19:53:02.151452   36163 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1205 19:53:02.151501   36163 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-867324 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-867324 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:53:02.151570   36163 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 19:53:02.151626   36163 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 19:53:02.190123   36163 cri.go:89] found id: ""
	I1205 19:53:02.190204   36163 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 19:53:02.200326   36163 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 19:53:02.210063   36163 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1205 19:53:02.210124   36163 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 19:53:02.219610   36163 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 19:53:02.219647   36163 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 19:53:02.271983   36163 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1205 19:53:02.272204   36163 kubeadm.go:322] [preflight] Running pre-flight checks
	I1205 19:53:02.320279   36163 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1205 19:53:02.320391   36163 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1050-aws
	I1205 19:53:02.320451   36163 kubeadm.go:322] OS: Linux
	I1205 19:53:02.320513   36163 kubeadm.go:322] CGROUPS_CPU: enabled
	I1205 19:53:02.320584   36163 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1205 19:53:02.320647   36163 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1205 19:53:02.320720   36163 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1205 19:53:02.320783   36163 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1205 19:53:02.320853   36163 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1205 19:53:02.403006   36163 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 19:53:02.403301   36163 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 19:53:02.403438   36163 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 19:53:02.616109   36163 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 19:53:02.617690   36163 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 19:53:02.617984   36163 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1205 19:53:02.716130   36163 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 19:53:02.719658   36163 out.go:204]   - Generating certificates and keys ...
	I1205 19:53:02.719737   36163 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1205 19:53:02.719817   36163 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1205 19:53:03.656156   36163 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 19:53:03.921753   36163 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1205 19:53:04.629102   36163 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1205 19:53:04.966425   36163 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1205 19:53:05.238957   36163 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1205 19:53:05.239360   36163 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-867324 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1205 19:53:07.222234   36163 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1205 19:53:07.222622   36163 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-867324 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1205 19:53:08.101747   36163 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 19:53:08.261490   36163 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 19:53:08.923033   36163 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1205 19:53:08.923408   36163 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 19:53:09.365941   36163 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 19:53:09.527215   36163 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 19:53:10.092369   36163 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 19:53:10.437989   36163 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 19:53:10.438761   36163 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 19:53:10.440930   36163 out.go:204]   - Booting up control plane ...
	I1205 19:53:10.441021   36163 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 19:53:10.448347   36163 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 19:53:10.450346   36163 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 19:53:10.451866   36163 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 19:53:10.462107   36163 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 19:53:22.464719   36163 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.002107 seconds
	I1205 19:53:22.464845   36163 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 19:53:22.478485   36163 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 19:53:22.999694   36163 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 19:53:22.999847   36163 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-867324 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1205 19:53:23.509207   36163 kubeadm.go:322] [bootstrap-token] Using token: ajz6a8.rk0m76ct8hqf6pzt
	I1205 19:53:23.511132   36163 out.go:204]   - Configuring RBAC rules ...
	I1205 19:53:23.511257   36163 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 19:53:23.515511   36163 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 19:53:23.539071   36163 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 19:53:23.548355   36163 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 19:53:23.559431   36163 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 19:53:23.563024   36163 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 19:53:23.583970   36163 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 19:53:23.884007   36163 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1205 19:53:23.956121   36163 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1205 19:53:23.960592   36163 kubeadm.go:322] 
	I1205 19:53:23.960665   36163 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1205 19:53:23.960677   36163 kubeadm.go:322] 
	I1205 19:53:23.960750   36163 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1205 19:53:23.960758   36163 kubeadm.go:322] 
	I1205 19:53:23.960782   36163 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1205 19:53:23.961212   36163 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 19:53:23.961265   36163 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 19:53:23.961275   36163 kubeadm.go:322] 
	I1205 19:53:23.961325   36163 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1205 19:53:23.961398   36163 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 19:53:23.961465   36163 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 19:53:23.961472   36163 kubeadm.go:322] 
	I1205 19:53:23.961745   36163 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 19:53:23.961825   36163 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1205 19:53:23.961833   36163 kubeadm.go:322] 
	I1205 19:53:23.962078   36163 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ajz6a8.rk0m76ct8hqf6pzt \
	I1205 19:53:23.962185   36163 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:6da2d77b39f3e1ef9cef384839cc68d840e02bf2206be4d2a37e26b3d0a71759 \
	I1205 19:53:23.962382   36163 kubeadm.go:322]     --control-plane 
	I1205 19:53:23.962395   36163 kubeadm.go:322] 
	I1205 19:53:23.962631   36163 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1205 19:53:23.962644   36163 kubeadm.go:322] 
	I1205 19:53:23.962910   36163 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ajz6a8.rk0m76ct8hqf6pzt \
	I1205 19:53:23.963185   36163 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:6da2d77b39f3e1ef9cef384839cc68d840e02bf2206be4d2a37e26b3d0a71759 
	I1205 19:53:23.966931   36163 kubeadm.go:322] W1205 19:53:02.271248    1229 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1205 19:53:23.967141   36163 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1050-aws\n", err: exit status 1
	I1205 19:53:23.967242   36163 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 19:53:23.967363   36163 kubeadm.go:322] W1205 19:53:10.448497    1229 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1205 19:53:23.967482   36163 kubeadm.go:322] W1205 19:53:10.450453    1229 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1205 19:53:23.967499   36163 cni.go:84] Creating CNI manager for ""
	I1205 19:53:23.967507   36163 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 19:53:23.970873   36163 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1205 19:53:23.973598   36163 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1205 19:53:23.978671   36163 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1205 19:53:23.978691   36163 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1205 19:53:24.000112   36163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1205 19:53:24.453276   36163 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 19:53:24.453401   36163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:53:24.453487   36163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728 minikube.k8s.io/name=ingress-addon-legacy-867324 minikube.k8s.io/updated_at=2023_12_05T19_53_24_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:53:24.604812   36163 ops.go:34] apiserver oom_adj: -16
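
The oom_adj value reported above is read straight from procfs; -16 tells the kernel's OOM killer to strongly avoid killing kube-apiserver under memory pressure. A sketch of re-checking it by hand on the node, using pgrep as the log's own command does:

    # Print the OOM score adjustment of the running kube-apiserver process.
    cat /proc/$(pgrep -xn kube-apiserver)/oom_adj
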
	I1205 19:53:24.604895   36163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:53:24.710345   36163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:53:25.307001   36163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:53:25.806960   36163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:53:26.307074   36163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:53:26.806491   36163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:53:27.307029   36163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:53:27.807141   36163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:53:28.307288   36163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:53:28.806474   36163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:53:29.306431   36163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:53:29.806952   36163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:53:30.307189   36163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:53:30.806621   36163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:53:31.307047   36163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:53:31.807127   36163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:53:32.306646   36163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:53:32.807337   36163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:53:33.307271   36163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:53:33.807263   36163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:53:34.306561   36163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:53:34.806807   36163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:53:35.306765   36163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:53:35.806757   36163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:53:36.306492   36163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:53:36.807236   36163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:53:37.307431   36163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:53:37.807386   36163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:53:38.307019   36163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:53:38.807414   36163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:53:39.306493   36163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:53:39.411840   36163 kubeadm.go:1088] duration metric: took 14.958478963s to wait for elevateKubeSystemPrivileges.
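
The burst of `kubectl get sa default` runs above (one roughly every 500ms) is a readiness poll: minikube waits until kubeadm's controllers have created the default ServiceAccount before elevating kube-system privileges. A minimal sketch of the same wait loop, not minikube's actual code:

    # Poll until the "default" ServiceAccount exists, then continue.
    until sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
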
	I1205 19:53:39.411870   36163 kubeadm.go:406] StartCluster complete in 37.260374061s
	I1205 19:53:39.411886   36163 settings.go:142] acquiring lock: {Name:mk9158e056caaf62837361622cedbf37e18c3f8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:53:39.411943   36163 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17731-2478/kubeconfig
	I1205 19:53:39.412775   36163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/kubeconfig: {Name:mka2e3e3347ae085678ba2bb20225628c9c86ab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:53:39.413495   36163 kapi.go:59] client config for ingress-addon-legacy-867324: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.crt", KeyFile:"/home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.key", CAFile:"/home/jenkins/minikube-integration/17731-2478/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6350), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 19:53:39.414539   36163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 19:53:39.414806   36163 config.go:182] Loaded profile config "ingress-addon-legacy-867324": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1205 19:53:39.414858   36163 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1205 19:53:39.414917   36163 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-867324"
	I1205 19:53:39.414931   36163 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-867324"
	I1205 19:53:39.414978   36163 cert_rotation.go:137] Starting client certificate rotation controller
	I1205 19:53:39.414984   36163 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-867324"
	I1205 19:53:39.414994   36163 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-867324"
	I1205 19:53:39.414980   36163 host.go:66] Checking if "ingress-addon-legacy-867324" exists ...
	I1205 19:53:39.415289   36163 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-867324 --format={{.State.Status}}
	I1205 19:53:39.415436   36163 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-867324 --format={{.State.Status}}
	I1205 19:53:39.450538   36163 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 19:53:39.453590   36163 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 19:53:39.453611   36163 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 19:53:39.453680   36163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-867324
	I1205 19:53:39.464938   36163 kapi.go:59] client config for ingress-addon-legacy-867324: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.crt", KeyFile:"/home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.key", CAFile:"/home/jenkins/minikube-integration/17731-2478/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6350), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 19:53:39.465205   36163 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-867324"
	I1205 19:53:39.465236   36163 host.go:66] Checking if "ingress-addon-legacy-867324" exists ...
	I1205 19:53:39.465685   36163 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-867324 --format={{.State.Status}}
	I1205 19:53:39.482956   36163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/ingress-addon-legacy-867324/id_rsa Username:docker}
	I1205 19:53:39.502102   36163 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 19:53:39.502127   36163 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 19:53:39.502193   36163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-867324
	I1205 19:53:39.527136   36163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/ingress-addon-legacy-867324/id_rsa Username:docker}
	I1205 19:53:39.569627   36163 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-867324" context rescaled to 1 replicas
	I1205 19:53:39.569669   36163 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:53:39.577078   36163 out.go:177] * Verifying Kubernetes components...
	I1205 19:53:39.579518   36163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:53:39.676827   36163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 19:53:39.677459   36163 kapi.go:59] client config for ingress-addon-legacy-867324: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.crt", KeyFile:"/home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.key", CAFile:"/home/jenkins/minikube-integration/17731-2478/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6350), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 19:53:39.677726   36163 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-867324" to be "Ready" ...
	I1205 19:53:39.692553   36163 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 19:53:39.715409   36163 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 19:53:40.155726   36163 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1205 19:53:40.318219   36163 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1205 19:53:40.320264   36163 addons.go:502] enable addons completed in 905.417051ms: enabled=[default-storageclass storage-provisioner]
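
The host record mentioned at 19:53:40.155726 comes from the sed pipeline run at 19:53:39.676827: it splices a hosts plugin block into CoreDNS's Corefile so in-cluster pods can resolve the gateway by name. Reconstructed from that sed expression (indentation approximate), the injected fragment is:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }
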
	I1205 19:53:41.930175   36163 node_ready.go:58] node "ingress-addon-legacy-867324" has status "Ready":"False"
	I1205 19:53:44.424920   36163 node_ready.go:58] node "ingress-addon-legacy-867324" has status "Ready":"False"
	I1205 19:53:46.425343   36163 node_ready.go:58] node "ingress-addon-legacy-867324" has status "Ready":"False"
	I1205 19:53:47.424227   36163 node_ready.go:49] node "ingress-addon-legacy-867324" has status "Ready":"True"
	I1205 19:53:47.424255   36163 node_ready.go:38] duration metric: took 7.746506656s waiting for node "ingress-addon-legacy-867324" to be "Ready" ...
	I1205 19:53:47.424265   36163 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 19:53:47.433757   36163 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-dmwtz" in "kube-system" namespace to be "Ready" ...
	I1205 19:53:49.440422   36163 pod_ready.go:102] pod "coredns-66bff467f8-dmwtz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-05 19:53:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1205 19:53:51.440631   36163 pod_ready.go:102] pod "coredns-66bff467f8-dmwtz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-05 19:53:39 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1205 19:53:53.442405   36163 pod_ready.go:102] pod "coredns-66bff467f8-dmwtz" in "kube-system" namespace has status "Ready":"False"
	I1205 19:53:55.443147   36163 pod_ready.go:102] pod "coredns-66bff467f8-dmwtz" in "kube-system" namespace has status "Ready":"False"
	I1205 19:53:57.943187   36163 pod_ready.go:102] pod "coredns-66bff467f8-dmwtz" in "kube-system" namespace has status "Ready":"False"
	I1205 19:54:00.443543   36163 pod_ready.go:102] pod "coredns-66bff467f8-dmwtz" in "kube-system" namespace has status "Ready":"False"
	I1205 19:54:00.943255   36163 pod_ready.go:92] pod "coredns-66bff467f8-dmwtz" in "kube-system" namespace has status "Ready":"True"
	I1205 19:54:00.943281   36163 pod_ready.go:81] duration metric: took 13.509497167s waiting for pod "coredns-66bff467f8-dmwtz" in "kube-system" namespace to be "Ready" ...
	I1205 19:54:00.943292   36163 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-867324" in "kube-system" namespace to be "Ready" ...
	I1205 19:54:00.947362   36163 pod_ready.go:92] pod "etcd-ingress-addon-legacy-867324" in "kube-system" namespace has status "Ready":"True"
	I1205 19:54:00.947389   36163 pod_ready.go:81] duration metric: took 4.089209ms waiting for pod "etcd-ingress-addon-legacy-867324" in "kube-system" namespace to be "Ready" ...
	I1205 19:54:00.947404   36163 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-867324" in "kube-system" namespace to be "Ready" ...
	I1205 19:54:00.951543   36163 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-867324" in "kube-system" namespace has status "Ready":"True"
	I1205 19:54:00.951568   36163 pod_ready.go:81] duration metric: took 4.152217ms waiting for pod "kube-apiserver-ingress-addon-legacy-867324" in "kube-system" namespace to be "Ready" ...
	I1205 19:54:00.951579   36163 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-867324" in "kube-system" namespace to be "Ready" ...
	I1205 19:54:00.955599   36163 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-867324" in "kube-system" namespace has status "Ready":"True"
	I1205 19:54:00.955619   36163 pod_ready.go:81] duration metric: took 4.03233ms waiting for pod "kube-controller-manager-ingress-addon-legacy-867324" in "kube-system" namespace to be "Ready" ...
	I1205 19:54:00.955630   36163 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cwztp" in "kube-system" namespace to be "Ready" ...
	I1205 19:54:00.959771   36163 pod_ready.go:92] pod "kube-proxy-cwztp" in "kube-system" namespace has status "Ready":"True"
	I1205 19:54:00.959788   36163 pod_ready.go:81] duration metric: took 4.151799ms waiting for pod "kube-proxy-cwztp" in "kube-system" namespace to be "Ready" ...
	I1205 19:54:00.959798   36163 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-867324" in "kube-system" namespace to be "Ready" ...
	I1205 19:54:01.138774   36163 request.go:629] Waited for 178.882179ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-867324
	I1205 19:54:01.338655   36163 request.go:629] Waited for 197.337864ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-867324
	I1205 19:54:01.341388   36163 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-867324" in "kube-system" namespace has status "Ready":"True"
	I1205 19:54:01.341410   36163 pod_ready.go:81] duration metric: took 381.58018ms waiting for pod "kube-scheduler-ingress-addon-legacy-867324" in "kube-system" namespace to be "Ready" ...
	I1205 19:54:01.341424   36163 pod_ready.go:38] duration metric: took 13.917141s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 19:54:01.341446   36163 api_server.go:52] waiting for apiserver process to appear ...
	I1205 19:54:01.341510   36163 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 19:54:01.354094   36163 api_server.go:72] duration metric: took 21.784397238s to wait for apiserver process to appear ...
	I1205 19:54:01.354112   36163 api_server.go:88] waiting for apiserver healthz status ...
	I1205 19:54:01.354127   36163 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1205 19:54:01.362700   36163 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1205 19:54:01.363554   36163 api_server.go:141] control plane version: v1.18.20
	I1205 19:54:01.363575   36163 api_server.go:131] duration metric: took 9.457009ms to wait for apiserver health ...
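
The healthz probe above can be reproduced by hand with the profile's client certificates (paths as listed in the client config earlier in this log); a 200 response with body `ok` means the apiserver is serving:

    # Manual equivalent of the healthz check, using this run's cert paths.
    MK=/home/jenkins/minikube-integration/17731-2478/.minikube
    curl --cacert "$MK/ca.crt" \
         --cert "$MK/profiles/ingress-addon-legacy-867324/client.crt" \
         --key  "$MK/profiles/ingress-addon-legacy-867324/client.key" \
         https://192.168.49.2:8443/healthz
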
	I1205 19:54:01.363583   36163 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 19:54:01.538928   36163 request.go:629] Waited for 175.281639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1205 19:54:01.544893   36163 system_pods.go:59] 8 kube-system pods found
	I1205 19:54:01.544927   36163 system_pods.go:61] "coredns-66bff467f8-dmwtz" [71743a45-268b-4d21-8c19-2a4a002100a1] Running
	I1205 19:54:01.544934   36163 system_pods.go:61] "etcd-ingress-addon-legacy-867324" [1078ddb3-de69-4549-a59c-819e3efc4819] Running
	I1205 19:54:01.544962   36163 system_pods.go:61] "kindnet-pm5ch" [2e7d508e-fbf8-4c88-8f51-5a779811fd54] Running
	I1205 19:54:01.544975   36163 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-867324" [84f4e155-9801-4bef-b1a8-c905c2c25670] Running
	I1205 19:54:01.544984   36163 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-867324" [286eb34a-1174-4990-8cb7-7b497df2f284] Running
	I1205 19:54:01.544993   36163 system_pods.go:61] "kube-proxy-cwztp" [2cc9a429-32f8-4343-937d-430b1cb59ebb] Running
	I1205 19:54:01.544998   36163 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-867324" [2f1df0d1-71e9-4a20-888b-e36d9054cc7b] Running
	I1205 19:54:01.545006   36163 system_pods.go:61] "storage-provisioner" [62f20dfe-eea2-4d39-bbbc-cd11ff4050bf] Running
	I1205 19:54:01.545012   36163 system_pods.go:74] duration metric: took 181.42342ms to wait for pod list to return data ...
	I1205 19:54:01.545020   36163 default_sa.go:34] waiting for default service account to be created ...
	I1205 19:54:01.739124   36163 request.go:629] Waited for 194.03139ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1205 19:54:01.741555   36163 default_sa.go:45] found service account: "default"
	I1205 19:54:01.741580   36163 default_sa.go:55] duration metric: took 196.545406ms for default service account to be created ...
	I1205 19:54:01.741589   36163 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 19:54:01.938967   36163 request.go:629] Waited for 197.320264ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1205 19:54:01.944827   36163 system_pods.go:86] 8 kube-system pods found
	I1205 19:54:01.944856   36163 system_pods.go:89] "coredns-66bff467f8-dmwtz" [71743a45-268b-4d21-8c19-2a4a002100a1] Running
	I1205 19:54:01.944863   36163 system_pods.go:89] "etcd-ingress-addon-legacy-867324" [1078ddb3-de69-4549-a59c-819e3efc4819] Running
	I1205 19:54:01.944868   36163 system_pods.go:89] "kindnet-pm5ch" [2e7d508e-fbf8-4c88-8f51-5a779811fd54] Running
	I1205 19:54:01.944874   36163 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-867324" [84f4e155-9801-4bef-b1a8-c905c2c25670] Running
	I1205 19:54:01.944879   36163 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-867324" [286eb34a-1174-4990-8cb7-7b497df2f284] Running
	I1205 19:54:01.944885   36163 system_pods.go:89] "kube-proxy-cwztp" [2cc9a429-32f8-4343-937d-430b1cb59ebb] Running
	I1205 19:54:01.944890   36163 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-867324" [2f1df0d1-71e9-4a20-888b-e36d9054cc7b] Running
	I1205 19:54:01.944900   36163 system_pods.go:89] "storage-provisioner" [62f20dfe-eea2-4d39-bbbc-cd11ff4050bf] Running
	I1205 19:54:01.944906   36163 system_pods.go:126] duration metric: took 203.312685ms to wait for k8s-apps to be running ...
	I1205 19:54:01.944918   36163 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 19:54:01.944974   36163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:54:01.958049   36163 system_svc.go:56] duration metric: took 13.121387ms WaitForService to wait for kubelet.
	I1205 19:54:01.958071   36163 kubeadm.go:581] duration metric: took 22.388380043s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1205 19:54:01.958089   36163 node_conditions.go:102] verifying NodePressure condition ...
	I1205 19:54:02.138414   36163 request.go:629] Waited for 180.254203ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1205 19:54:02.141135   36163 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1205 19:54:02.141170   36163 node_conditions.go:123] node cpu capacity is 2
	I1205 19:54:02.141185   36163 node_conditions.go:105] duration metric: took 183.087631ms to run NodePressure ...
	I1205 19:54:02.141216   36163 start.go:228] waiting for startup goroutines ...
	I1205 19:54:02.141230   36163 start.go:233] waiting for cluster config update ...
	I1205 19:54:02.141240   36163 start.go:242] writing updated cluster config ...
	I1205 19:54:02.141520   36163 ssh_runner.go:195] Run: rm -f paused
	I1205 19:54:02.202652   36163 start.go:600] kubectl: 1.28.4, cluster: 1.18.20 (minor skew: 10)
	I1205 19:54:02.205200   36163 out.go:177] 
	W1205 19:54:02.207156   36163 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.18.20.
	I1205 19:54:02.208995   36163 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1205 19:54:02.211284   36163 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-867324" cluster and "default" namespace by default
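
kubectl 1.28 against a 1.18 control plane is far outside the supported one-minor-version client skew, hence the warning above; the version-matched client bundled with minikube avoids it (profile flag assumed from this run):

    # Use minikube's bundled kubectl, which matches the cluster version.
    out/minikube-linux-arm64 -p ingress-addon-legacy-867324 kubectl -- get pods -A
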
	
	* 
	* ==> CRI-O <==
	* Dec 05 19:57:03 ingress-addon-legacy-867324 crio[896]: time="2023-12-05 19:57:03.266372518Z" level=info msg="Stopped container 9992b17c55f9e61c87b98d385a30f9996531fcb733c7dc87918bc80bd3207b24: ingress-nginx/ingress-nginx-controller-7fcf777cb7-w2sv2/controller" id=7590aa62-b0a3-48df-a221-e1598621c0b9 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 05 19:57:03 ingress-addon-legacy-867324 crio[896]: time="2023-12-05 19:57:03.267022854Z" level=info msg="Stopping pod sandbox: 636b320554e938a0a7852c19784d100e23cb8c77790ae7f00e8b3ce4883105d6" id=34c7c463-3a5d-4687-b244-afb6ec88f0eb name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 05 19:57:03 ingress-addon-legacy-867324 crio[896]: time="2023-12-05 19:57:03.267330179Z" level=info msg="Stopped container 9992b17c55f9e61c87b98d385a30f9996531fcb733c7dc87918bc80bd3207b24: ingress-nginx/ingress-nginx-controller-7fcf777cb7-w2sv2/controller" id=bff397f0-28ec-437c-b243-fe7442bbfe2e name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 05 19:57:03 ingress-addon-legacy-867324 crio[896]: time="2023-12-05 19:57:03.267715569Z" level=info msg="Stopping pod sandbox: 636b320554e938a0a7852c19784d100e23cb8c77790ae7f00e8b3ce4883105d6" id=e8a1a45a-75d6-461f-8b69-b2211e8fabc7 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 05 19:57:03 ingress-addon-legacy-867324 crio[896]: time="2023-12-05 19:57:03.270438909Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-FIWMPQLU54FI4562 - [0:0]\n:KUBE-HP-PKYK7JQLLIIBFMIQ - [0:0]\n-X KUBE-HP-FIWMPQLU54FI4562\n-X KUBE-HP-PKYK7JQLLIIBFMIQ\nCOMMIT\n"
	Dec 05 19:57:03 ingress-addon-legacy-867324 crio[896]: time="2023-12-05 19:57:03.271994188Z" level=info msg="Closing host port tcp:80"
	Dec 05 19:57:03 ingress-addon-legacy-867324 crio[896]: time="2023-12-05 19:57:03.272034886Z" level=info msg="Closing host port tcp:443"
	Dec 05 19:57:03 ingress-addon-legacy-867324 crio[896]: time="2023-12-05 19:57:03.273119810Z" level=info msg="Host port tcp:80 does not have an open socket"
	Dec 05 19:57:03 ingress-addon-legacy-867324 crio[896]: time="2023-12-05 19:57:03.273147084Z" level=info msg="Host port tcp:443 does not have an open socket"
	Dec 05 19:57:03 ingress-addon-legacy-867324 crio[896]: time="2023-12-05 19:57:03.273292070Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-w2sv2 Namespace:ingress-nginx ID:636b320554e938a0a7852c19784d100e23cb8c77790ae7f00e8b3ce4883105d6 UID:fb64cff5-9e3c-4c19-8c20-1e1e8b91b9a5 NetNS:/var/run/netns/02ddea03-86eb-4260-8958-4aa9ab6a65ac Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 05 19:57:03 ingress-addon-legacy-867324 crio[896]: time="2023-12-05 19:57:03.273431592Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-w2sv2 from CNI network \"kindnet\" (type=ptp)"
	Dec 05 19:57:03 ingress-addon-legacy-867324 crio[896]: time="2023-12-05 19:57:03.305269027Z" level=info msg="Stopped pod sandbox: 636b320554e938a0a7852c19784d100e23cb8c77790ae7f00e8b3ce4883105d6" id=34c7c463-3a5d-4687-b244-afb6ec88f0eb name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 05 19:57:03 ingress-addon-legacy-867324 crio[896]: time="2023-12-05 19:57:03.305375678Z" level=info msg="Stopped pod sandbox (already stopped): 636b320554e938a0a7852c19784d100e23cb8c77790ae7f00e8b3ce4883105d6" id=e8a1a45a-75d6-461f-8b69-b2211e8fabc7 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 05 19:57:03 ingress-addon-legacy-867324 crio[896]: time="2023-12-05 19:57:03.309233911Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=d007767c-476c-4698-8ae7-94a84b8d9571 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 05 19:57:03 ingress-addon-legacy-867324 crio[896]: time="2023-12-05 19:57:03.309423050Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=d007767c-476c-4698-8ae7-94a84b8d9571 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 05 19:57:03 ingress-addon-legacy-867324 crio[896]: time="2023-12-05 19:57:03.310475424Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=8099b3a8-0ce0-45cb-9aa4-52581d0e4b5e name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 05 19:57:03 ingress-addon-legacy-867324 crio[896]: time="2023-12-05 19:57:03.310631971Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=8099b3a8-0ce0-45cb-9aa4-52581d0e4b5e name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 05 19:57:03 ingress-addon-legacy-867324 crio[896]: time="2023-12-05 19:57:03.311315579Z" level=info msg="Creating container: default/hello-world-app-5f5d8b66bb-kq64r/hello-world-app" id=a6cff922-c6e3-4816-9e3a-32e0bd3483ec name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Dec 05 19:57:03 ingress-addon-legacy-867324 crio[896]: time="2023-12-05 19:57:03.311395391Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 05 19:57:03 ingress-addon-legacy-867324 crio[896]: time="2023-12-05 19:57:03.402107655Z" level=info msg="Created container 2a6246166b3e5175c67029b50e934357d90bac6dc93caf4f10cb9c2592243184: default/hello-world-app-5f5d8b66bb-kq64r/hello-world-app" id=a6cff922-c6e3-4816-9e3a-32e0bd3483ec name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Dec 05 19:57:03 ingress-addon-legacy-867324 crio[896]: time="2023-12-05 19:57:03.403117773Z" level=info msg="Starting container: 2a6246166b3e5175c67029b50e934357d90bac6dc93caf4f10cb9c2592243184" id=b74842a2-6c03-4039-966c-85fc6a8b0f69 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Dec 05 19:57:03 ingress-addon-legacy-867324 conmon[3772]: conmon 2a6246166b3e5175c670 <ninfo>: container 3783 exited with status 1
	Dec 05 19:57:03 ingress-addon-legacy-867324 crio[896]: time="2023-12-05 19:57:03.418962090Z" level=info msg="Started container" PID=3783 containerID=2a6246166b3e5175c67029b50e934357d90bac6dc93caf4f10cb9c2592243184 description=default/hello-world-app-5f5d8b66bb-kq64r/hello-world-app id=b74842a2-6c03-4039-966c-85fc6a8b0f69 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=755862bf182996a960c6c93f3fb14c56a1bb1bd0c08e5934cb14142cd71831ea
	Dec 05 19:57:04 ingress-addon-legacy-867324 crio[896]: time="2023-12-05 19:57:04.159011746Z" level=info msg="Removing container: 2fa2e1dee5b3ec7bceca300a38348e2e07d58f3c860c0871dc2d7dcb731f3bd7" id=3e5da889-b8c6-4c89-8d81-bd6a4da798df name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Dec 05 19:57:04 ingress-addon-legacy-867324 crio[896]: time="2023-12-05 19:57:04.182217876Z" level=info msg="Removed container 2fa2e1dee5b3ec7bceca300a38348e2e07d58f3c860c0871dc2d7dcb731f3bd7: default/hello-world-app-5f5d8b66bb-kq64r/hello-world-app" id=3e5da889-b8c6-4c89-8d81-bd6a4da798df name=/runtime.v1alpha2.RuntimeService/RemoveContainer
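
The tail of the CRI-O log shows two things at once: the ingress-nginx controller pod being torn down (StopContainer/StopPodSandbox plus removal of its hostport 80/443 iptables rules), and hello-world-app crash-looping, with each new container exiting with status 1 immediately after start and the previous attempt being removed. A sketch of inspecting this from inside the node (container ID prefix taken from the log above):

    # List all attempts of the crashing container and dump its output.
    sudo crictl ps -a --name hello-world-app
    sudo crictl logs 2a6246166b3e5
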
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2a6246166b3e5       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                   5 seconds ago       Exited              hello-world-app           2                   755862bf18299       hello-world-app-5f5d8b66bb-kq64r
	74b5c5156b7bd       docker.io/library/nginx@sha256:18d2bb20c22e511b92a3ec81f553edfcaeeb74fd1c96a92c56a6c4252c75eec7                    2 minutes ago       Running             nginx                     0                   bee2ceb781b55       nginx
	9992b17c55f9e       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   636b320554e93       ingress-nginx-controller-7fcf777cb7-w2sv2
	48afd1cfe2541       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              patch                     0                   2fe48cd4959e6       ingress-nginx-admission-patch-9hmj4
	6673601e6d1f7       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              create                    0                   73daac4361c84       ingress-nginx-admission-create-68n8w
	ef33a6213d74e       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                   3 minutes ago       Running             coredns                   0                   2248e30b69e87       coredns-66bff467f8-dmwtz
	17ae2ffa61543       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2    3 minutes ago       Running             storage-provisioner       0                   3cbd83241d3fc       storage-provisioner
	c197cb904ce75       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   c96b686df16a8       kindnet-pm5ch
	f448bcda670ee       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                   3 minutes ago       Running             kube-proxy                0                   7a570811eb257       kube-proxy-cwztp
	ca7e39e9b590f       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                   3 minutes ago       Running             kube-controller-manager   0                   da212cec2efa2       kube-controller-manager-ingress-addon-legacy-867324
	4f69ca44de4c8       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                   3 minutes ago       Running             etcd                      0                   a31592702b50e       etcd-ingress-addon-legacy-867324
	e397bbb7ae4b4       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                   3 minutes ago       Running             kube-scheduler            0                   ae7f9d166d3f2       kube-scheduler-ingress-addon-legacy-867324
	62d9f70edfd86       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                   3 minutes ago       Running             kube-apiserver            0                   4c07c39e170da       kube-apiserver-ingress-addon-legacy-867324
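	This table is CRI-level state, so it can be reproduced directly on the node; a sketch, assuming the crictl CLI that ships alongside CRI-O:

	  # list all containers, including exited ones, via the CRI socket
	  out/minikube-linux-arm64 -p ingress-addon-legacy-867324 ssh "sudo crictl ps -a"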
	
	* 
	* ==> coredns [ef33a6213d74eb467aa01060739a39b77d02e796e63862903b1978e8b11cbc90] <==
	* [INFO] 10.244.0.5:35668 - 13952 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00194762s
	[INFO] 10.244.0.5:36493 - 2698 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000041961s
	[INFO] 10.244.0.5:36493 - 32574 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003906644s
	[INFO] 10.244.0.5:35668 - 45001 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.004390077s
	[INFO] 10.244.0.5:36493 - 59427 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00058709s
	[INFO] 10.244.0.5:35668 - 25817 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000109277s
	[INFO] 10.244.0.5:36493 - 34239 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00005189s
	[INFO] 10.244.0.5:39499 - 31730 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000078631s
	[INFO] 10.244.0.5:55535 - 10148 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000037334s
	[INFO] 10.244.0.5:55535 - 15955 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000037063s
	[INFO] 10.244.0.5:55535 - 15518 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000033124s
	[INFO] 10.244.0.5:55535 - 36995 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000032911s
	[INFO] 10.244.0.5:55535 - 9327 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000032132s
	[INFO] 10.244.0.5:55535 - 40246 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000031672s
	[INFO] 10.244.0.5:39499 - 55682 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000037916s
	[INFO] 10.244.0.5:39499 - 4199 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000035644s
	[INFO] 10.244.0.5:39499 - 62975 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000030728s
	[INFO] 10.244.0.5:55535 - 43844 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001078098s
	[INFO] 10.244.0.5:39499 - 58683 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000037834s
	[INFO] 10.244.0.5:39499 - 47269 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000036332s
	[INFO] 10.244.0.5:55535 - 33909 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001016994s
	[INFO] 10.244.0.5:55535 - 20114 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000051315s
	[INFO] 10.244.0.5:39499 - 54568 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000736121s
	[INFO] 10.244.0.5:39499 - 19932 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000760745s
	[INFO] 10.244.0.5:39499 - 37638 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00004032s
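	The NXDOMAIN/NOERROR pattern above is the pod resolver walking its search path: with the default ndots:5, "hello-world-app.default.svc.cluster.local" (four dots) is first tried with each search suffix, and only the unsuffixed service name answers NOERROR. A sketch for confirming the resolver config that drives this (the nginx pod name comes from the container list above):

	  # kubelet writes this file into every pod that uses the cluster DNS policy
	  kubectl --context ingress-addon-legacy-867324 exec nginx -- cat /etc/resolv.conf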
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-867324
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-867324
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728
	                    minikube.k8s.io/name=ingress-addon-legacy-867324
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_05T19_53_24_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Dec 2023 19:53:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-867324
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Dec 2023 19:57:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Dec 2023 19:56:57 +0000   Tue, 05 Dec 2023 19:53:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Dec 2023 19:56:57 +0000   Tue, 05 Dec 2023 19:53:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Dec 2023 19:56:57 +0000   Tue, 05 Dec 2023 19:53:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Dec 2023 19:56:57 +0000   Tue, 05 Dec 2023 19:53:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-867324
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 1807a43f0121429eb4ed97b6808d4c0e
	  System UUID:                63a7de45-3477-48c8-823d-c8d8ed0d5198
	  Boot ID:                    ade55ee8-b6ef-4756-8af5-2453aa07c908
	  Kernel Version:             5.15.0-1050-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-kq64r                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 coredns-66bff467f8-dmwtz                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m30s
	  kube-system                 etcd-ingress-addon-legacy-867324                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 kindnet-pm5ch                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m30s
	  kube-system                 kube-apiserver-ingress-addon-legacy-867324             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-867324    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 kube-proxy-cwztp                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 kube-scheduler-ingress-addon-legacy-867324             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  3m56s (x4 over 3m56s)  kubelet     Node ingress-addon-legacy-867324 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m56s (x5 over 3m56s)  kubelet     Node ingress-addon-legacy-867324 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m56s (x4 over 3m56s)  kubelet     Node ingress-addon-legacy-867324 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m42s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m42s                  kubelet     Node ingress-addon-legacy-867324 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m42s                  kubelet     Node ingress-addon-legacy-867324 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m42s                  kubelet     Node ingress-addon-legacy-867324 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m29s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m22s                  kubelet     Node ingress-addon-legacy-867324 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000746] FS-Cache: N-cookie c=0000000c [p=00000003 fl=2 nc=0 na=1]
	[  +0.000998] FS-Cache: N-cookie d=000000005352abd4{9p.inode} n=0000000092935cee
	[  +0.001126] FS-Cache: N-key=[8] '7d6ced0000000000'
	[  +0.003128] FS-Cache: Duplicate cookie detected
	[  +0.000743] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001056] FS-Cache: O-cookie d=000000005352abd4{9p.inode} n=0000000096f14bef
	[  +0.001106] FS-Cache: O-key=[8] '7d6ced0000000000'
	[  +0.000771] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.000986] FS-Cache: N-cookie d=000000005352abd4{9p.inode} n=0000000087309102
	[  +0.001095] FS-Cache: N-key=[8] '7d6ced0000000000'
	[  +2.579818] FS-Cache: Duplicate cookie detected
	[  +0.000741] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001012] FS-Cache: O-cookie d=000000005352abd4{9p.inode} n=00000000c814e0ec
	[  +0.001118] FS-Cache: O-key=[8] '7c6ced0000000000'
	[  +0.000756] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.000988] FS-Cache: N-cookie d=000000005352abd4{9p.inode} n=0000000095aba95a
	[  +0.001110] FS-Cache: N-key=[8] '7c6ced0000000000'
	[  +0.433599] FS-Cache: Duplicate cookie detected
	[  +0.000720] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001003] FS-Cache: O-cookie d=000000005352abd4{9p.inode} n=00000000851860cc
	[  +0.001106] FS-Cache: O-key=[8] '876ced0000000000'
	[  +0.000728] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.000964] FS-Cache: N-cookie d=000000005352abd4{9p.inode} n=0000000092935cee
	[  +0.001081] FS-Cache: N-key=[8] '876ced0000000000'
	[Dec 5 19:52] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	* 
	* ==> etcd [4f69ca44de4c89d296d0f0a73c721a1a623a079ba863c41f07249501285dba93] <==
	* raft2023/12/05 19:53:15 INFO: aec36adc501070cc became follower at term 0
	raft2023/12/05 19:53:15 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/12/05 19:53:15 INFO: aec36adc501070cc became follower at term 1
	raft2023/12/05 19:53:15 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-12-05 19:53:16.055893 W | auth: simple token is not cryptographically signed
	2023-12-05 19:53:16.372386 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-12-05 19:53:16.409968 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-12-05 19:53:16.598762 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/12/05 19:53:16 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-12-05 19:53:16.599056 I | embed: listening for peers on 192.168.49.2:2380
	2023-12-05 19:53:16.599141 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-12-05 19:53:16.599194 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	raft2023/12/05 19:53:16 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/12/05 19:53:16 INFO: aec36adc501070cc became candidate at term 2
	raft2023/12/05 19:53:16 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/12/05 19:53:16 INFO: aec36adc501070cc became leader at term 2
	raft2023/12/05 19:53:16 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-12-05 19:53:16.783796 I | etcdserver: setting up the initial cluster version to 3.4
	2023-12-05 19:53:16.924393 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-12-05 19:53:16.975855 I | etcdserver/api: enabled capabilities for version 3.4
	2023-12-05 19:53:16.983816 I | embed: ready to serve client requests
	2023-12-05 19:53:16.993915 I | embed: ready to serve client requests
	2023-12-05 19:53:17.003866 I | etcdserver: published {Name:ingress-addon-legacy-867324 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-12-05 19:53:17.034342 I | embed: serving client requests on 127.0.0.1:2379
	2023-12-05 19:53:17.041999 I | embed: serving client requests on 192.168.49.2:2379
	
	* 
	* ==> kernel <==
	*  19:57:09 up 39 min,  0 users,  load average: 0.22, 1.01, 0.83
	Linux ingress-addon-legacy-867324 5.15.0-1050-aws #55~20.04.1-Ubuntu SMP Mon Nov 6 12:18:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [c197cb904ce75b0cca913719b74ffd5757958a8d3712003b9990ee80bb903bbb] <==
	* I1205 19:55:02.505462       1 main.go:227] handling current node
	I1205 19:55:12.509237       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:55:12.509268       1 main.go:227] handling current node
	I1205 19:55:22.516766       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:55:22.516792       1 main.go:227] handling current node
	I1205 19:55:32.526477       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:55:32.526505       1 main.go:227] handling current node
	I1205 19:55:42.533813       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:55:42.533842       1 main.go:227] handling current node
	I1205 19:55:52.548397       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:55:52.548422       1 main.go:227] handling current node
	I1205 19:56:02.554314       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:56:02.554346       1 main.go:227] handling current node
	I1205 19:56:12.558645       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:56:12.558673       1 main.go:227] handling current node
	I1205 19:56:22.561603       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:56:22.561629       1 main.go:227] handling current node
	I1205 19:56:32.566542       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:56:32.566571       1 main.go:227] handling current node
	I1205 19:56:42.580284       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:56:42.580438       1 main.go:227] handling current node
	I1205 19:56:52.597515       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:56:52.597716       1 main.go:227] handling current node
	I1205 19:57:02.607745       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1205 19:57:02.607872       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [62d9f70edfd861a6881dd5bb89482d5270181346ee08ddb8ae2d048154c79dc7] <==
	* I1205 19:53:21.121706       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	E1205 19:53:21.129763       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1205 19:53:21.205746       1 cache.go:39] Caches are synced for autoregister controller
	I1205 19:53:21.206280       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1205 19:53:21.207803       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1205 19:53:21.207830       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1205 19:53:21.225196       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1205 19:53:22.004619       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1205 19:53:22.004662       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1205 19:53:22.010388       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1205 19:53:22.014409       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1205 19:53:22.014428       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1205 19:53:22.362472       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 19:53:22.403438       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1205 19:53:22.566740       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1205 19:53:22.567929       1 controller.go:609] quota admission added evaluator for: endpoints
	I1205 19:53:22.571471       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 19:53:22.882745       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 19:53:23.431997       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1205 19:53:23.869222       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1205 19:53:23.916366       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1205 19:53:39.031619       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1205 19:53:39.460087       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1205 19:54:03.089516       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1205 19:54:23.917951       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [ca7e39e9b590ffc24634743a5548dbad8828425a35ecf2a2ce2edde4abedd663] <==
	* I1205 19:53:39.453765       1 shared_informer.go:230] Caches are synced for daemon sets 
	I1205 19:53:39.549423       1 shared_informer.go:230] Caches are synced for stateful set 
	I1205 19:53:39.558685       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"fcb61ff5-b158-4ea4-bb86-97603c09a16d", APIVersion:"apps/v1", ResourceVersion:"347", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1205 19:53:39.580929       1 shared_informer.go:230] Caches are synced for endpoint 
	I1205 19:53:39.591402       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1205 19:53:39.591424       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1205 19:53:39.591508       1 shared_informer.go:230] Caches are synced for resource quota 
	I1205 19:53:39.591555       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1205 19:53:39.606021       1 shared_informer.go:230] Caches are synced for disruption 
	I1205 19:53:39.606098       1 disruption.go:339] Sending events to api server.
	I1205 19:53:39.606970       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"57c7f69f-6393-40ed-a552-802fc792f1ed", APIVersion:"apps/v1", ResourceVersion:"222", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-pm5ch
	I1205 19:53:39.637466       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"45708d45-5ae1-4b4f-a70b-b8c4b86b369a", APIVersion:"apps/v1", ResourceVersion:"208", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-cwztp
	I1205 19:53:39.633465       1 shared_informer.go:230] Caches are synced for resource quota 
	I1205 19:53:39.680596       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"2aec73e7-2604-41a3-891e-1bbbdba28ab9", APIVersion:"apps/v1", ResourceVersion:"348", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-4qbmt
	I1205 19:53:39.699819       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
	I1205 19:53:49.240121       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1205 19:54:03.057820       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"51a94470-5bcc-4e8a-9595-5783780cbd7a", APIVersion:"apps/v1", ResourceVersion:"472", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1205 19:54:03.107714       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"15836d87-6022-4085-a1e7-5605b6212922", APIVersion:"apps/v1", ResourceVersion:"474", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-w2sv2
	I1205 19:54:03.184378       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"1d18e9e9-2a7b-4f8b-809e-95312a04ce1a", APIVersion:"batch/v1", ResourceVersion:"479", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-68n8w
	I1205 19:54:03.239623       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"7053d714-286f-44d7-a199-2b56e0381987", APIVersion:"batch/v1", ResourceVersion:"487", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-9hmj4
	I1205 19:54:07.481090       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"1d18e9e9-2a7b-4f8b-809e-95312a04ce1a", APIVersion:"batch/v1", ResourceVersion:"491", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1205 19:54:07.494679       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"7053d714-286f-44d7-a199-2b56e0381987", APIVersion:"batch/v1", ResourceVersion:"497", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1205 19:56:42.306047       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"f07f1e8d-ce5c-450e-8bd5-d471208f1474", APIVersion:"apps/v1", ResourceVersion:"704", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1205 19:56:42.335007       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"8e047429-bfe4-453b-aeec-b8f9f853ddcb", APIVersion:"apps/v1", ResourceVersion:"705", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-kq64r
	E1205 19:57:05.756172       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-rkkcm" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [f448bcda670eea0ffd677d3eb0603f6d57286ef5146183a3cb28c3ae6a64db4b] <==
	* W1205 19:53:40.405564       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1205 19:53:40.416497       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1205 19:53:40.416616       1 server_others.go:186] Using iptables Proxier.
	I1205 19:53:40.416931       1 server.go:583] Version: v1.18.20
	I1205 19:53:40.417952       1 config.go:315] Starting service config controller
	I1205 19:53:40.418025       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1205 19:53:40.418141       1 config.go:133] Starting endpoints config controller
	I1205 19:53:40.418211       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1205 19:53:40.518243       1 shared_informer.go:230] Caches are synced for service config 
	I1205 19:53:40.518337       1 shared_informer.go:230] Caches are synced for endpoints config 
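	With no proxy mode configured, kube-proxy fell back to the iptables proxier, so service routing on this node is plain NAT rules; a sketch for inspecting them (KUBE-SERVICES is the top-level chain the iptables proxier installs):

	  out/minikube-linux-arm64 -p ingress-addon-legacy-867324 ssh "sudo iptables -t nat -L KUBE-SERVICES"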
	
	* 
	* ==> kube-scheduler [e397bbb7ae4b44793f3bbce2505a607c92dcf3ecad854559eac133db71e4d22b] <==
	* W1205 19:53:21.176788       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1205 19:53:21.212764       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1205 19:53:21.212851       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1205 19:53:21.215456       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1205 19:53:21.215495       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1205 19:53:21.216100       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1205 19:53:21.216205       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1205 19:53:21.219220       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1205 19:53:21.219344       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 19:53:21.219459       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1205 19:53:21.219671       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 19:53:21.220456       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1205 19:53:21.220460       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1205 19:53:21.220543       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 19:53:21.220611       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 19:53:21.220661       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1205 19:53:21.220676       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1205 19:53:21.220737       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1205 19:53:21.220835       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1205 19:53:22.100279       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1205 19:53:22.111063       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 19:53:22.157650       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1205 19:53:22.615665       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1205 19:53:39.075696       1 factory.go:503] pod: kube-system/coredns-66bff467f8-4qbmt is already present in the active queue
	E1205 19:53:40.329461       1 factory.go:503] pod: kube-system/storage-provisioner is already present in unschedulable queue
	
	* 
	* ==> kubelet <==
	* Dec 05 19:56:44 ingress-addon-legacy-867324 kubelet[1642]: E1205 19:56:44.327596    1642 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 05 19:56:44 ingress-addon-legacy-867324 kubelet[1642]: E1205 19:56:44.327658    1642 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 05 19:56:44 ingress-addon-legacy-867324 kubelet[1642]: E1205 19:56:44.327704    1642 pod_workers.go:191] Error syncing pod 4fbb73d7-bcb3-42c9-9de9-30fa60e58412 ("kube-ingress-dns-minikube_kube-system(4fbb73d7-bcb3-42c9-9de9-30fa60e58412)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Dec 05 19:56:46 ingress-addon-legacy-867324 kubelet[1642]: I1205 19:56:46.127481    1642 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: fc0faa4bb32fa6cb71e435210c99594fee01a390dd17bb85689caadb1db13f31
	Dec 05 19:56:47 ingress-addon-legacy-867324 kubelet[1642]: I1205 19:56:47.130008    1642 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: fc0faa4bb32fa6cb71e435210c99594fee01a390dd17bb85689caadb1db13f31
	Dec 05 19:56:47 ingress-addon-legacy-867324 kubelet[1642]: I1205 19:56:47.130233    1642 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 2fa2e1dee5b3ec7bceca300a38348e2e07d58f3c860c0871dc2d7dcb731f3bd7
	Dec 05 19:56:47 ingress-addon-legacy-867324 kubelet[1642]: E1205 19:56:47.130453    1642 pod_workers.go:191] Error syncing pod fe802f4a-a0d2-48d1-b51d-21c374dd8286 ("hello-world-app-5f5d8b66bb-kq64r_default(fe802f4a-a0d2-48d1-b51d-21c374dd8286)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-kq64r_default(fe802f4a-a0d2-48d1-b51d-21c374dd8286)"
	Dec 05 19:56:48 ingress-addon-legacy-867324 kubelet[1642]: I1205 19:56:48.132881    1642 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 2fa2e1dee5b3ec7bceca300a38348e2e07d58f3c860c0871dc2d7dcb731f3bd7
	Dec 05 19:56:48 ingress-addon-legacy-867324 kubelet[1642]: E1205 19:56:48.133154    1642 pod_workers.go:191] Error syncing pod fe802f4a-a0d2-48d1-b51d-21c374dd8286 ("hello-world-app-5f5d8b66bb-kq64r_default(fe802f4a-a0d2-48d1-b51d-21c374dd8286)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-kq64r_default(fe802f4a-a0d2-48d1-b51d-21c374dd8286)"
	Dec 05 19:56:58 ingress-addon-legacy-867324 kubelet[1642]: I1205 19:56:58.364243    1642 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-jr9rj" (UniqueName: "kubernetes.io/secret/4fbb73d7-bcb3-42c9-9de9-30fa60e58412-minikube-ingress-dns-token-jr9rj") pod "4fbb73d7-bcb3-42c9-9de9-30fa60e58412" (UID: "4fbb73d7-bcb3-42c9-9de9-30fa60e58412")
	Dec 05 19:56:58 ingress-addon-legacy-867324 kubelet[1642]: I1205 19:56:58.368462    1642 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fbb73d7-bcb3-42c9-9de9-30fa60e58412-minikube-ingress-dns-token-jr9rj" (OuterVolumeSpecName: "minikube-ingress-dns-token-jr9rj") pod "4fbb73d7-bcb3-42c9-9de9-30fa60e58412" (UID: "4fbb73d7-bcb3-42c9-9de9-30fa60e58412"). InnerVolumeSpecName "minikube-ingress-dns-token-jr9rj". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 05 19:56:58 ingress-addon-legacy-867324 kubelet[1642]: I1205 19:56:58.464652    1642 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-jr9rj" (UniqueName: "kubernetes.io/secret/4fbb73d7-bcb3-42c9-9de9-30fa60e58412-minikube-ingress-dns-token-jr9rj") on node "ingress-addon-legacy-867324" DevicePath ""
	Dec 05 19:57:01 ingress-addon-legacy-867324 kubelet[1642]: E1205 19:57:01.067841    1642 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-w2sv2.179e07bdd31cc25d", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-w2sv2", UID:"fb64cff5-9e3c-4c19-8c20-1e1e8b91b9a5", APIVersion:"v1", ResourceVersion:"481", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-867324"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc153fe0343db805d, ext:217246947807, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc153fe0343db805d, ext:217246947807, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-w2sv2.179e07bdd31cc25d" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 05 19:57:01 ingress-addon-legacy-867324 kubelet[1642]: E1205 19:57:01.082889    1642 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-w2sv2.179e07bdd31cc25d", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-w2sv2", UID:"fb64cff5-9e3c-4c19-8c20-1e1e8b91b9a5", APIVersion:"v1", ResourceVersion:"481", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-867324"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc153fe0343db805d, ext:217246947807, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc153fe03447c2343, ext:217257475277, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-w2sv2.179e07bdd31cc25d" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 05 19:57:03 ingress-addon-legacy-867324 kubelet[1642]: I1205 19:57:03.308187    1642 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 2fa2e1dee5b3ec7bceca300a38348e2e07d58f3c860c0871dc2d7dcb731f3bd7
	Dec 05 19:57:04 ingress-addon-legacy-867324 kubelet[1642]: W1205 19:57:04.155844    1642 pod_container_deletor.go:77] Container "636b320554e938a0a7852c19784d100e23cb8c77790ae7f00e8b3ce4883105d6" not found in pod's containers
	Dec 05 19:57:04 ingress-addon-legacy-867324 kubelet[1642]: I1205 19:57:04.157245    1642 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 2fa2e1dee5b3ec7bceca300a38348e2e07d58f3c860c0871dc2d7dcb731f3bd7
	Dec 05 19:57:04 ingress-addon-legacy-867324 kubelet[1642]: I1205 19:57:04.157466    1642 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 2a6246166b3e5175c67029b50e934357d90bac6dc93caf4f10cb9c2592243184
	Dec 05 19:57:04 ingress-addon-legacy-867324 kubelet[1642]: E1205 19:57:04.157702    1642 pod_workers.go:191] Error syncing pod fe802f4a-a0d2-48d1-b51d-21c374dd8286 ("hello-world-app-5f5d8b66bb-kq64r_default(fe802f4a-a0d2-48d1-b51d-21c374dd8286)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-kq64r_default(fe802f4a-a0d2-48d1-b51d-21c374dd8286)"
	Dec 05 19:57:05 ingress-addon-legacy-867324 kubelet[1642]: I1205 19:57:05.280866    1642 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-tv9sp" (UniqueName: "kubernetes.io/secret/fb64cff5-9e3c-4c19-8c20-1e1e8b91b9a5-ingress-nginx-token-tv9sp") pod "fb64cff5-9e3c-4c19-8c20-1e1e8b91b9a5" (UID: "fb64cff5-9e3c-4c19-8c20-1e1e8b91b9a5")
	Dec 05 19:57:05 ingress-addon-legacy-867324 kubelet[1642]: I1205 19:57:05.280965    1642 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/fb64cff5-9e3c-4c19-8c20-1e1e8b91b9a5-webhook-cert") pod "fb64cff5-9e3c-4c19-8c20-1e1e8b91b9a5" (UID: "fb64cff5-9e3c-4c19-8c20-1e1e8b91b9a5")
	Dec 05 19:57:05 ingress-addon-legacy-867324 kubelet[1642]: I1205 19:57:05.286596    1642 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb64cff5-9e3c-4c19-8c20-1e1e8b91b9a5-ingress-nginx-token-tv9sp" (OuterVolumeSpecName: "ingress-nginx-token-tv9sp") pod "fb64cff5-9e3c-4c19-8c20-1e1e8b91b9a5" (UID: "fb64cff5-9e3c-4c19-8c20-1e1e8b91b9a5"). InnerVolumeSpecName "ingress-nginx-token-tv9sp". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 05 19:57:05 ingress-addon-legacy-867324 kubelet[1642]: I1205 19:57:05.287141    1642 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb64cff5-9e3c-4c19-8c20-1e1e8b91b9a5-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "fb64cff5-9e3c-4c19-8c20-1e1e8b91b9a5" (UID: "fb64cff5-9e3c-4c19-8c20-1e1e8b91b9a5"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 05 19:57:05 ingress-addon-legacy-867324 kubelet[1642]: I1205 19:57:05.381238    1642 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/fb64cff5-9e3c-4c19-8c20-1e1e8b91b9a5-webhook-cert") on node "ingress-addon-legacy-867324" DevicePath ""
	Dec 05 19:57:05 ingress-addon-legacy-867324 kubelet[1642]: I1205 19:57:05.381287    1642 reconciler.go:319] Volume detached for volume "ingress-nginx-token-tv9sp" (UniqueName: "kubernetes.io/secret/fb64cff5-9e3c-4c19-8c20-1e1e8b91b9a5-ingress-nginx-token-tv9sp") on node "ingress-addon-legacy-867324" DevicePath ""
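	The ImageInspectError entries at the top of this log are a short-name resolution failure: "cryptexlabs/minikube-ingress-dns:0.3.0" carries no registry host, and the node's /etc/containers/registries.conf defines no unqualified-search registries for CRI-O to try. A minimal sketch of a node-side fix, assuming docker.io is the registry the image was meant to come from (CRI-O may need a restart to pick it up):

	  # on the node (e.g. via minikube ssh): give CRI-O a default search registry
	  echo 'unqualified-search-registries = ["docker.io"]' | sudo tee -a /etc/containers/registries.conf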
	
	* 
	* ==> storage-provisioner [17ae2ffa615433b8050fff435b063de0bfc154cdec0819e4e385abcfab8b9462] <==
	* I1205 19:53:52.150665       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 19:53:52.184413       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 19:53:52.184590       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 19:53:52.201562       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 19:53:52.202767       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-867324_5033460f-1cb0-4155-9af6-9305dc744bf0!
	I1205 19:53:52.203749       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bec34f4e-1d33-45a3-a2b2-b7607e366b0a", APIVersion:"v1", ResourceVersion:"420", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-867324_5033460f-1cb0-4155-9af6-9305dc744bf0 became leader
	I1205 19:53:52.303160       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-867324_5033460f-1cb0-4155-9af6-9305dc744bf0!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-867324 -n ingress-addon-legacy-867324
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-867324 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (175.01s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (4.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-930892 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-930892 -- exec busybox-5bc68d56bd-ctbfn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-930892 -- exec busybox-5bc68d56bd-ctbfn -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-930892 -- exec busybox-5bc68d56bd-ctbfn -- sh -c "ping -c 1 192.168.58.1": exit status 1 (243.130111ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-ctbfn): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-930892 -- exec busybox-5bc68d56bd-gg5q2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-930892 -- exec busybox-5bc68d56bd-gg5q2 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-930892 -- exec busybox-5bc68d56bd-gg5q2 -- sh -c "ping -c 1 192.168.58.1": exit status 1 (234.603453ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-gg5q2): exit status 1
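Both pods print the PING header and then fail with "permission denied (are you root?)", so the ICMP socket is being refused before any packet leaves the pod; the host is not necessarily unreachable. busybox ping needs either CAP_NET_RAW or an unprivileged ICMP Echo socket, and the latter is gated by a sysctl that defaults to disabled ("1 0"). A sketch of the check, assuming that sysctl is the gate here rather than a capability explicitly dropped in the pod spec:

  # inside the pod: "1 0" means unprivileged ICMP Echo sockets are disabled
  out/minikube-linux-arm64 kubectl -p multinode-930892 -- exec busybox-5bc68d56bd-ctbfn -- sh -c "cat /proc/sys/net/ipv4/ping_group_range"

Widening the range (net.ipv4.ping_group_range is on Kubernetes' safe-sysctls list, settable via the pod's securityContext.sysctls) or granting CAP_NET_RAW would let the unprivileged busybox ping through.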
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-930892
helpers_test.go:235: (dbg) docker inspect multinode-930892:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d5e6ffca9b1cdea5ea6e00c49ce2f376d8a49697a136a6f3830a6acb7f8f8841",
	        "Created": "2023-12-05T20:03:36.180784885Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 73246,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-05T20:03:36.510716775Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e4e0f3cc6f04c458835e9edb05d52f031520d40521bc3568d81cbb7c06a79ef2",
	        "ResolvConfPath": "/var/lib/docker/containers/d5e6ffca9b1cdea5ea6e00c49ce2f376d8a49697a136a6f3830a6acb7f8f8841/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d5e6ffca9b1cdea5ea6e00c49ce2f376d8a49697a136a6f3830a6acb7f8f8841/hostname",
	        "HostsPath": "/var/lib/docker/containers/d5e6ffca9b1cdea5ea6e00c49ce2f376d8a49697a136a6f3830a6acb7f8f8841/hosts",
	        "LogPath": "/var/lib/docker/containers/d5e6ffca9b1cdea5ea6e00c49ce2f376d8a49697a136a6f3830a6acb7f8f8841/d5e6ffca9b1cdea5ea6e00c49ce2f376d8a49697a136a6f3830a6acb7f8f8841-json.log",
	        "Name": "/multinode-930892",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-930892:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-930892",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/652f8ae415792ce8adc22530dede1f17e5f46ead1514ad5520ea713fe6a3da6f-init/diff:/var/lib/docker/overlay2/ad36f68c22d2503e0656ab5d87c276f08a38342a08463cd6653b41bc4f40eea5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/652f8ae415792ce8adc22530dede1f17e5f46ead1514ad5520ea713fe6a3da6f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/652f8ae415792ce8adc22530dede1f17e5f46ead1514ad5520ea713fe6a3da6f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/652f8ae415792ce8adc22530dede1f17e5f46ead1514ad5520ea713fe6a3da6f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-930892",
	                "Source": "/var/lib/docker/volumes/multinode-930892/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-930892",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-930892",
	                "name.minikube.sigs.k8s.io": "multinode-930892",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d5798e2eaacec3b1242a0931e3455d76590873cfee3b26b028764ceb0db3107e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32847"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32846"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32845"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32844"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d5798e2eaace",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-930892": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d5e6ffca9b1c",
	                        "multinode-930892"
	                    ],
	                    "NetworkID": "f407d22902b57124b891e41ee172f06a92ddb5228c9018f0234b17182c59d6af",
	                    "EndpointID": "a7f3d00d86aa1e95fba92a9f17bdd557d4beff2eaa49b23aaf753e30ed40bdb1",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
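
The inspect dump above carries the two facts the failing ping test turns on: the host-side port bindings under NetworkSettings.Ports, and the container's 192.168.58.2 address and 192.168.58.1 gateway on the multinode-930892 bridge network. As a minimal sketch for readers reproducing this locally (not part of the test harness; only the container name is taken from the log above), the same fields can be decoded from "docker inspect" output with just the Go standard library:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// container models only the fields of the docker inspect JSON used here.
type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
		Networks map[string]struct {
			IPAddress string
			Gateway   string
		}
	}
}

func main() {
	// docker inspect prints a JSON array with one element per container.
	out, err := exec.Command("docker", "inspect", "multinode-930892").Output()
	if err != nil {
		log.Fatal(err)
	}
	var cs []container
	if err := json.Unmarshal(out, &cs); err != nil {
		log.Fatal(err)
	}
	if len(cs) == 0 {
		log.Fatal("no such container")
	}
	ns := cs[0].NetworkSettings
	// Host port that 22/tcp inside the container is published on.
	for _, b := range ns.Ports["22/tcp"] {
		fmt.Printf("ssh forwarded to %s:%s\n", b.HostIp, b.HostPort)
	}
	// Per-network address and gateway, as used by the ping test.
	for name, n := range ns.Networks {
		fmt.Printf("network %s: ip=%s gw=%s\n", name, n.IPAddress, n.Gateway)
	}
}

Run against the container above, this would print "ssh forwarded to 127.0.0.1:32847" and "network multinode-930892: ip=192.168.58.2 gw=192.168.58.1", matching the NetworkSettings block in the dump.
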
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p multinode-930892 -n multinode-930892
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p multinode-930892 logs -n 25: (1.502691203s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-344042                           | mount-start-2-344042 | jenkins | v1.32.0 | 05 Dec 23 20:03 UTC | 05 Dec 23 20:03 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-344042 ssh -- ls                    | mount-start-2-344042 | jenkins | v1.32.0 | 05 Dec 23 20:03 UTC | 05 Dec 23 20:03 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-342253                           | mount-start-1-342253 | jenkins | v1.32.0 | 05 Dec 23 20:03 UTC | 05 Dec 23 20:03 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-344042 ssh -- ls                    | mount-start-2-344042 | jenkins | v1.32.0 | 05 Dec 23 20:03 UTC | 05 Dec 23 20:03 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-344042                           | mount-start-2-344042 | jenkins | v1.32.0 | 05 Dec 23 20:03 UTC | 05 Dec 23 20:03 UTC |
	| start   | -p mount-start-2-344042                           | mount-start-2-344042 | jenkins | v1.32.0 | 05 Dec 23 20:03 UTC | 05 Dec 23 20:03 UTC |
	| ssh     | mount-start-2-344042 ssh -- ls                    | mount-start-2-344042 | jenkins | v1.32.0 | 05 Dec 23 20:03 UTC | 05 Dec 23 20:03 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-344042                           | mount-start-2-344042 | jenkins | v1.32.0 | 05 Dec 23 20:03 UTC | 05 Dec 23 20:03 UTC |
	| delete  | -p mount-start-1-342253                           | mount-start-1-342253 | jenkins | v1.32.0 | 05 Dec 23 20:03 UTC | 05 Dec 23 20:03 UTC |
	| start   | -p multinode-930892                               | multinode-930892     | jenkins | v1.32.0 | 05 Dec 23 20:03 UTC | 05 Dec 23 20:05 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-930892 -- apply -f                   | multinode-930892     | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-930892 -- rollout                    | multinode-930892     | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-930892 -- get pods -o                | multinode-930892     | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-930892 -- get pods -o                | multinode-930892     | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-930892 -- exec                       | multinode-930892     | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | busybox-5bc68d56bd-ctbfn --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-930892 -- exec                       | multinode-930892     | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | busybox-5bc68d56bd-gg5q2 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-930892 -- exec                       | multinode-930892     | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | busybox-5bc68d56bd-ctbfn --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-930892 -- exec                       | multinode-930892     | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | busybox-5bc68d56bd-gg5q2 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-930892 -- exec                       | multinode-930892     | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | busybox-5bc68d56bd-ctbfn -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-930892 -- exec                       | multinode-930892     | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | busybox-5bc68d56bd-gg5q2 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-930892 -- get pods -o                | multinode-930892     | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-930892 -- exec                       | multinode-930892     | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | busybox-5bc68d56bd-ctbfn                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-930892 -- exec                       | multinode-930892     | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC |                     |
	|         | busybox-5bc68d56bd-ctbfn -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-930892 -- exec                       | multinode-930892     | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | busybox-5bc68d56bd-gg5q2                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-930892 -- exec                       | multinode-930892     | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC |                     |
	|         | busybox-5bc68d56bd-gg5q2 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/05 20:03:30
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:03:30.810087   72782 out.go:296] Setting OutFile to fd 1 ...
	I1205 20:03:30.810292   72782 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:03:30.810319   72782 out.go:309] Setting ErrFile to fd 2...
	I1205 20:03:30.810339   72782 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:03:30.810618   72782 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-2478/.minikube/bin
	I1205 20:03:30.811075   72782 out.go:303] Setting JSON to false
	I1205 20:03:30.812234   72782 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2757,"bootTime":1701803854,"procs":466,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 20:03:30.812335   72782 start.go:138] virtualization:  
	I1205 20:03:30.814726   72782 out.go:177] * [multinode-930892] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1205 20:03:30.817619   72782 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 20:03:30.819404   72782 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:03:30.817720   72782 notify.go:220] Checking for updates...
	I1205 20:03:30.823131   72782 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-2478/kubeconfig
	I1205 20:03:30.825266   72782 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-2478/.minikube
	I1205 20:03:30.827212   72782 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 20:03:30.829184   72782 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:03:30.831109   72782 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 20:03:30.854195   72782 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1205 20:03:30.854307   72782 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 20:03:30.933895   72782 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-12-05 20:03:30.923948499 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1205 20:03:30.933998   72782 docker.go:295] overlay module found
	I1205 20:03:30.936128   72782 out.go:177] * Using the docker driver based on user configuration
	I1205 20:03:30.937728   72782 start.go:298] selected driver: docker
	I1205 20:03:30.937742   72782 start.go:902] validating driver "docker" against <nil>
	I1205 20:03:30.937756   72782 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:03:30.938386   72782 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 20:03:31.007514   72782 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-12-05 20:03:30.998052678 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1205 20:03:31.007697   72782 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1205 20:03:31.007983   72782 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:03:31.010135   72782 out.go:177] * Using Docker driver with root privileges
	I1205 20:03:31.012005   72782 cni.go:84] Creating CNI manager for ""
	I1205 20:03:31.012024   72782 cni.go:136] 0 nodes found, recommending kindnet
	I1205 20:03:31.012033   72782 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 20:03:31.012047   72782 start_flags.go:323] config:
	{Name:multinode-930892 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-930892 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:03:31.014115   72782 out.go:177] * Starting control plane node multinode-930892 in cluster multinode-930892
	I1205 20:03:31.015917   72782 cache.go:121] Beginning downloading kic base image for docker with crio
	I1205 20:03:31.017790   72782 out.go:177] * Pulling base image ...
	I1205 20:03:31.019541   72782 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 20:03:31.019562   72782 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local docker daemon
	I1205 20:03:31.019583   72782 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17731-2478/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I1205 20:03:31.019608   72782 cache.go:56] Caching tarball of preloaded images
	I1205 20:03:31.019683   72782 preload.go:174] Found /home/jenkins/minikube-integration/17731-2478/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1205 20:03:31.019692   72782 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1205 20:03:31.020123   72782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/config.json ...
	I1205 20:03:31.020154   72782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/config.json: {Name:mk92a171c5b4a206c346a53fd4f57b2315d51118 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:03:31.036655   72782 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local docker daemon, skipping pull
	I1205 20:03:31.036678   72782 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f exists in daemon, skipping load
	I1205 20:03:31.036697   72782 cache.go:194] Successfully downloaded all kic artifacts
	I1205 20:03:31.036756   72782 start.go:365] acquiring machines lock for multinode-930892: {Name:mkf99bed44570876a55f62eb35c3f708fa93dcf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:03:31.036860   72782 start.go:369] acquired machines lock for "multinode-930892" in 86.68µs
	I1205 20:03:31.036885   72782 start.go:93] Provisioning new machine with config: &{Name:multinode-930892 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-930892 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:03:31.036960   72782 start.go:125] createHost starting for "" (driver="docker")
	I1205 20:03:31.039239   72782 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1205 20:03:31.039487   72782 start.go:159] libmachine.API.Create for "multinode-930892" (driver="docker")
	I1205 20:03:31.039533   72782 client.go:168] LocalClient.Create starting
	I1205 20:03:31.039588   72782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem
	I1205 20:03:31.039625   72782 main.go:141] libmachine: Decoding PEM data...
	I1205 20:03:31.039643   72782 main.go:141] libmachine: Parsing certificate...
	I1205 20:03:31.039704   72782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17731-2478/.minikube/certs/cert.pem
	I1205 20:03:31.039724   72782 main.go:141] libmachine: Decoding PEM data...
	I1205 20:03:31.039735   72782 main.go:141] libmachine: Parsing certificate...
	I1205 20:03:31.040109   72782 cli_runner.go:164] Run: docker network inspect multinode-930892 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 20:03:31.056610   72782 cli_runner.go:211] docker network inspect multinode-930892 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 20:03:31.056690   72782 network_create.go:281] running [docker network inspect multinode-930892] to gather additional debugging logs...
	I1205 20:03:31.056710   72782 cli_runner.go:164] Run: docker network inspect multinode-930892
	W1205 20:03:31.073241   72782 cli_runner.go:211] docker network inspect multinode-930892 returned with exit code 1
	I1205 20:03:31.073270   72782 network_create.go:284] error running [docker network inspect multinode-930892]: docker network inspect multinode-930892: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-930892 not found
	I1205 20:03:31.073283   72782 network_create.go:286] output of [docker network inspect multinode-930892]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-930892 not found
	
	** /stderr **
	I1205 20:03:31.073382   72782 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 20:03:31.090614   72782 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b6ed01875673 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:6c:57:c2:6c} reservation:<nil>}
	I1205 20:03:31.090940   72782 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40024dbd60}
	I1205 20:03:31.090963   72782 network_create.go:124] attempt to create docker network multinode-930892 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1205 20:03:31.091025   72782 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-930892 multinode-930892
	I1205 20:03:31.157757   72782 network_create.go:108] docker network multinode-930892 192.168.58.0/24 created
	I1205 20:03:31.157800   72782 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-930892" container
	I1205 20:03:31.157872   72782 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 20:03:31.174483   72782 cli_runner.go:164] Run: docker volume create multinode-930892 --label name.minikube.sigs.k8s.io=multinode-930892 --label created_by.minikube.sigs.k8s.io=true
	I1205 20:03:31.192722   72782 oci.go:103] Successfully created a docker volume multinode-930892
	I1205 20:03:31.192815   72782 cli_runner.go:164] Run: docker run --rm --name multinode-930892-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-930892 --entrypoint /usr/bin/test -v multinode-930892:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -d /var/lib
	I1205 20:03:31.758005   72782 oci.go:107] Successfully prepared a docker volume multinode-930892
	I1205 20:03:31.758055   72782 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 20:03:31.758077   72782 kic.go:194] Starting extracting preloaded images to volume ...
	I1205 20:03:31.758159   72782 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17731-2478/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-930892:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -I lz4 -xf /preloaded.tar -C /extractDir
	I1205 20:03:36.094644   72782 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17731-2478/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-930892:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -I lz4 -xf /preloaded.tar -C /extractDir: (4.336444915s)
	I1205 20:03:36.094674   72782 kic.go:203] duration metric: took 4.336595 seconds to extract preloaded images to volume
	W1205 20:03:36.094825   72782 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1205 20:03:36.094936   72782 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 20:03:36.165643   72782 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-930892 --name multinode-930892 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-930892 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-930892 --network multinode-930892 --ip 192.168.58.2 --volume multinode-930892:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f
	I1205 20:03:36.520639   72782 cli_runner.go:164] Run: docker container inspect multinode-930892 --format={{.State.Running}}
	I1205 20:03:36.551604   72782 cli_runner.go:164] Run: docker container inspect multinode-930892 --format={{.State.Status}}
	I1205 20:03:36.577296   72782 cli_runner.go:164] Run: docker exec multinode-930892 stat /var/lib/dpkg/alternatives/iptables
	I1205 20:03:36.660222   72782 oci.go:144] the created container "multinode-930892" has a running status.
	I1205 20:03:36.660247   72782 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17731-2478/.minikube/machines/multinode-930892/id_rsa...
	I1205 20:03:37.031926   72782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/machines/multinode-930892/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1205 20:03:37.032137   72782 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17731-2478/.minikube/machines/multinode-930892/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 20:03:37.061299   72782 cli_runner.go:164] Run: docker container inspect multinode-930892 --format={{.State.Status}}
	I1205 20:03:37.089771   72782 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 20:03:37.089790   72782 kic_runner.go:114] Args: [docker exec --privileged multinode-930892 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 20:03:37.178279   72782 cli_runner.go:164] Run: docker container inspect multinode-930892 --format={{.State.Status}}
	I1205 20:03:37.209080   72782 machine.go:88] provisioning docker machine ...
	I1205 20:03:37.209107   72782 ubuntu.go:169] provisioning hostname "multinode-930892"
	I1205 20:03:37.209179   72782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-930892
	I1205 20:03:37.241104   72782 main.go:141] libmachine: Using SSH client type: native
	I1205 20:03:37.241553   72782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I1205 20:03:37.241567   72782 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-930892 && echo "multinode-930892" | sudo tee /etc/hostname
	I1205 20:03:37.453647   72782 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-930892
	
	I1205 20:03:37.453731   72782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-930892
	I1205 20:03:37.475053   72782 main.go:141] libmachine: Using SSH client type: native
	I1205 20:03:37.475462   72782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I1205 20:03:37.475487   72782 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-930892' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-930892/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-930892' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:03:37.632925   72782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:03:37.632956   72782 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17731-2478/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-2478/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-2478/.minikube}
	I1205 20:03:37.632979   72782 ubuntu.go:177] setting up certificates
	I1205 20:03:37.632993   72782 provision.go:83] configureAuth start
	I1205 20:03:37.633062   72782 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-930892
	I1205 20:03:37.656317   72782 provision.go:138] copyHostCerts
	I1205 20:03:37.656355   72782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17731-2478/.minikube/ca.pem
	I1205 20:03:37.656384   72782 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-2478/.minikube/ca.pem, removing ...
	I1205 20:03:37.656390   72782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-2478/.minikube/ca.pem
	I1205 20:03:37.656450   72782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-2478/.minikube/ca.pem (1078 bytes)
	I1205 20:03:37.656527   72782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17731-2478/.minikube/cert.pem
	I1205 20:03:37.656545   72782 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-2478/.minikube/cert.pem, removing ...
	I1205 20:03:37.656549   72782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-2478/.minikube/cert.pem
	I1205 20:03:37.656575   72782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-2478/.minikube/cert.pem (1123 bytes)
	I1205 20:03:37.656661   72782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17731-2478/.minikube/key.pem
	I1205 20:03:37.656683   72782 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-2478/.minikube/key.pem, removing ...
	I1205 20:03:37.656687   72782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-2478/.minikube/key.pem
	I1205 20:03:37.656717   72782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-2478/.minikube/key.pem (1679 bytes)
	I1205 20:03:37.656766   72782 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-2478/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca-key.pem org=jenkins.multinode-930892 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-930892]
	I1205 20:03:38.329888   72782 provision.go:172] copyRemoteCerts
	I1205 20:03:38.329988   72782 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:03:38.330043   72782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-930892
	I1205 20:03:38.347373   72782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/multinode-930892/id_rsa Username:docker}
	I1205 20:03:38.450142   72782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 20:03:38.450215   72782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 20:03:38.477579   72782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 20:03:38.477636   72782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:03:38.504538   72782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 20:03:38.504600   72782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1205 20:03:38.532089   72782 provision.go:86] duration metric: configureAuth took 899.075242ms
	I1205 20:03:38.532156   72782 ubuntu.go:193] setting minikube options for container-runtime
	I1205 20:03:38.532382   72782 config.go:182] Loaded profile config "multinode-930892": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:03:38.532519   72782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-930892
	I1205 20:03:38.551215   72782 main.go:141] libmachine: Using SSH client type: native
	I1205 20:03:38.551665   72782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I1205 20:03:38.551691   72782 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:03:38.809134   72782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:03:38.809157   72782 machine.go:91] provisioned docker machine in 1.600059535s
	I1205 20:03:38.809166   72782 client.go:171] LocalClient.Create took 7.76962759s
	I1205 20:03:38.809180   72782 start.go:167] duration metric: libmachine.API.Create for "multinode-930892" took 7.769692059s
	I1205 20:03:38.809187   72782 start.go:300] post-start starting for "multinode-930892" (driver="docker")
	I1205 20:03:38.809198   72782 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:03:38.809269   72782 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:03:38.809318   72782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-930892
	I1205 20:03:38.831222   72782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/multinode-930892/id_rsa Username:docker}
	I1205 20:03:38.934414   72782 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:03:38.938106   72782 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1205 20:03:38.938123   72782 command_runner.go:130] > NAME="Ubuntu"
	I1205 20:03:38.938130   72782 command_runner.go:130] > VERSION_ID="22.04"
	I1205 20:03:38.938137   72782 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1205 20:03:38.938148   72782 command_runner.go:130] > VERSION_CODENAME=jammy
	I1205 20:03:38.938153   72782 command_runner.go:130] > ID=ubuntu
	I1205 20:03:38.938157   72782 command_runner.go:130] > ID_LIKE=debian
	I1205 20:03:38.938163   72782 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1205 20:03:38.938169   72782 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1205 20:03:38.938177   72782 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1205 20:03:38.938186   72782 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1205 20:03:38.938191   72782 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1205 20:03:38.938491   72782 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 20:03:38.938525   72782 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1205 20:03:38.938536   72782 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1205 20:03:38.938547   72782 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1205 20:03:38.938559   72782 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-2478/.minikube/addons for local assets ...
	I1205 20:03:38.938618   72782 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-2478/.minikube/files for local assets ...
	I1205 20:03:38.938702   72782 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-2478/.minikube/files/etc/ssl/certs/77732.pem -> 77732.pem in /etc/ssl/certs
	I1205 20:03:38.938718   72782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/files/etc/ssl/certs/77732.pem -> /etc/ssl/certs/77732.pem
	I1205 20:03:38.938822   72782 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:03:38.948519   72782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/files/etc/ssl/certs/77732.pem --> /etc/ssl/certs/77732.pem (1708 bytes)
	I1205 20:03:38.976352   72782 start.go:303] post-start completed in 167.14914ms
	I1205 20:03:38.976711   72782 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-930892
	I1205 20:03:38.993384   72782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/config.json ...
	I1205 20:03:38.993640   72782 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 20:03:38.993686   72782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-930892
	I1205 20:03:39.010933   72782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/multinode-930892/id_rsa Username:docker}
	I1205 20:03:39.113306   72782 command_runner.go:130] > 14%!
	(MISSING)I1205 20:03:39.113871   72782 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 20:03:39.119391   72782 command_runner.go:130] > 168G
	I1205 20:03:39.119922   72782 start.go:128] duration metric: createHost completed in 8.082949903s
	I1205 20:03:39.119942   72782 start.go:83] releasing machines lock for "multinode-930892", held for 8.083073251s
	I1205 20:03:39.120008   72782 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-930892
	I1205 20:03:39.137251   72782 ssh_runner.go:195] Run: cat /version.json
	I1205 20:03:39.137305   72782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-930892
	I1205 20:03:39.137532   72782 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:03:39.137565   72782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-930892
	I1205 20:03:39.157849   72782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/multinode-930892/id_rsa Username:docker}
	I1205 20:03:39.165638   72782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/multinode-930892/id_rsa Username:docker}
	I1205 20:03:39.255993   72782 command_runner.go:130] > {"iso_version": "v1.32.1-1701107474-17206", "kicbase_version": "v0.0.42-1701387262-17703", "minikube_version": "v1.32.0", "commit": "196015715c4eb12e436d5bb69e555ba604cda88e"}
	I1205 20:03:39.256379   72782 ssh_runner.go:195] Run: systemctl --version
	I1205 20:03:39.386852   72782 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
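The curl probe above only has to prove that registry.k8s.io is reachable; the body it receives is the registry's redirect page. A hand-run variant that surfaces the HTTP status instead of the body (same 2-second timeout):

	curl -sS -m 2 -o /dev/null -w '%{http_code}\n' https://registry.k8s.io/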
	I1205 20:03:39.389790   72782 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.11)
	I1205 20:03:39.389833   72782 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1205 20:03:39.389899   72782 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:03:39.533836   72782 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 20:03:39.538921   72782 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1205 20:03:39.538943   72782 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1205 20:03:39.538951   72782 command_runner.go:130] > Device: 3ah/58d	Inode: 1088822     Links: 1
	I1205 20:03:39.538958   72782 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 20:03:39.538966   72782 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1205 20:03:39.538972   72782 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1205 20:03:39.538978   72782 command_runner.go:130] > Change: 2023-12-05 19:35:52.969728843 +0000
	I1205 20:03:39.538984   72782 command_runner.go:130] >  Birth: 2023-12-05 19:35:52.969728843 +0000
	I1205 20:03:39.539272   72782 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:03:39.564068   72782 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1205 20:03:39.564145   72782 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:03:39.601916   72782 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1205 20:03:39.601951   72782 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
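Rather than deleting CNI configs it does not manage, minikube renames them with a .mk_disabled suffix so cri-o stops loading them. Restoring them later is just stripping the suffix again; a small sketch (not a minikube command):

	for f in /etc/cni/net.d/*.mk_disabled; do
	    sudo mv "$f" "${f%.mk_disabled}"
	done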
	I1205 20:03:39.601959   72782 start.go:475] detecting cgroup driver to use...
	I1205 20:03:39.601987   72782 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1205 20:03:39.602035   72782 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:03:39.619507   72782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:03:39.633120   72782 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:03:39.633188   72782 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:03:39.648704   72782 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:03:39.665096   72782 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:03:39.752025   72782 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:03:39.847710   72782 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1205 20:03:39.847736   72782 docker.go:219] disabling docker service ...
	I1205 20:03:39.847831   72782 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:03:39.869305   72782 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:03:39.882579   72782 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:03:39.972701   72782 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1205 20:03:39.972771   72782 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:03:40.072940   72782 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1205 20:03:40.073536   72782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:03:40.087887   72782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:03:40.106494   72782 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1205 20:03:40.107837   72782 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1205 20:03:40.107902   72782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:03:40.119141   72782 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:03:40.119236   72782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:03:40.130545   72782 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:03:40.141919   72782 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
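Taken together, the three sed edits above leave the drop-in with a pinned pause image, the cgroupfs manager, and conmon running in the pod cgroup. Reconstructed from the sed expressions (the section headers are an assumption; only the key/value lines are confirmed by the crio config dump later in this log), /etc/crio/crio.conf.d/02-crio.conf now contains roughly:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"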
	I1205 20:03:40.153861   72782 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:03:40.165039   72782 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:03:40.174228   72782 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1205 20:03:40.175166   72782 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
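Both kernel settings poked above (bridge-nf-call-iptables, already 1, and ip_forward, forced to 1) are one-shot and vanish on reboot; inside the kicbase container that is fine because minikube re-runs this on every start. A persistent equivalent on an ordinary host would be:

	printf '%s\n' 'net.bridge.bridge-nf-call-iptables = 1' 'net.ipv4.ip_forward = 1' |
	    sudo tee /etc/sysctl.d/99-kubernetes.conf
	sudo sysctl --system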
	I1205 20:03:40.184953   72782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:03:40.274119   72782 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:03:40.400720   72782 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:03:40.400835   72782 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:03:40.405388   72782 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1205 20:03:40.405447   72782 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1205 20:03:40.405478   72782 command_runner.go:130] > Device: 43h/67d	Inode: 190         Links: 1
	I1205 20:03:40.405505   72782 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 20:03:40.405535   72782 command_runner.go:130] > Access: 2023-12-05 20:03:40.384311868 +0000
	I1205 20:03:40.405573   72782 command_runner.go:130] > Modify: 2023-12-05 20:03:40.384311868 +0000
	I1205 20:03:40.405594   72782 command_runner.go:130] > Change: 2023-12-05 20:03:40.384311868 +0000
	I1205 20:03:40.405622   72782 command_runner.go:130] >  Birth: -
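The "Will wait 60s for socket path" above is a Go-side poll; the shell shape of the same loop (an illustrative sketch, not what minikube executes) is:

	for i in $(seq 1 60); do
	    [ -S /var/run/crio/crio.sock ] && break
	    sleep 1
	done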
	I1205 20:03:40.405650   72782 start.go:543] Will wait 60s for crictl version
	I1205 20:03:40.405726   72782 ssh_runner.go:195] Run: which crictl
	I1205 20:03:40.409777   72782 command_runner.go:130] > /usr/bin/crictl
	I1205 20:03:40.409887   72782 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:03:40.452335   72782 command_runner.go:130] > Version:  0.1.0
	I1205 20:03:40.452401   72782 command_runner.go:130] > RuntimeName:  cri-o
	I1205 20:03:40.452421   72782 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1205 20:03:40.452440   72782 command_runner.go:130] > RuntimeApiVersion:  v1
	I1205 20:03:40.454901   72782 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
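The version probe above talks CRI over the socket configured in /etc/crictl.yaml earlier; the endpoint can also be forced explicitly, which is handy on hosts with several runtimes installed:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version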
	I1205 20:03:40.455030   72782 ssh_runner.go:195] Run: crio --version
	I1205 20:03:40.494750   72782 command_runner.go:130] > crio version 1.24.6
	I1205 20:03:40.494814   72782 command_runner.go:130] > Version:          1.24.6
	I1205 20:03:40.494837   72782 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1205 20:03:40.494857   72782 command_runner.go:130] > GitTreeState:     clean
	I1205 20:03:40.494886   72782 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1205 20:03:40.494912   72782 command_runner.go:130] > GoVersion:        go1.18.2
	I1205 20:03:40.494930   72782 command_runner.go:130] > Compiler:         gc
	I1205 20:03:40.494970   72782 command_runner.go:130] > Platform:         linux/arm64
	I1205 20:03:40.494993   72782 command_runner.go:130] > Linkmode:         dynamic
	I1205 20:03:40.495015   72782 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1205 20:03:40.495047   72782 command_runner.go:130] > SeccompEnabled:   true
	I1205 20:03:40.495070   72782 command_runner.go:130] > AppArmorEnabled:  false
	I1205 20:03:40.497790   72782 ssh_runner.go:195] Run: crio --version
	I1205 20:03:40.538154   72782 command_runner.go:130] > crio version 1.24.6
	I1205 20:03:40.538179   72782 command_runner.go:130] > Version:          1.24.6
	I1205 20:03:40.538187   72782 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1205 20:03:40.538193   72782 command_runner.go:130] > GitTreeState:     clean
	I1205 20:03:40.538220   72782 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1205 20:03:40.538230   72782 command_runner.go:130] > GoVersion:        go1.18.2
	I1205 20:03:40.538236   72782 command_runner.go:130] > Compiler:         gc
	I1205 20:03:40.538241   72782 command_runner.go:130] > Platform:         linux/arm64
	I1205 20:03:40.538250   72782 command_runner.go:130] > Linkmode:         dynamic
	I1205 20:03:40.538259   72782 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1205 20:03:40.538268   72782 command_runner.go:130] > SeccompEnabled:   true
	I1205 20:03:40.538286   72782 command_runner.go:130] > AppArmorEnabled:  false
	I1205 20:03:40.544279   72782 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1205 20:03:40.546351   72782 cli_runner.go:164] Run: docker network inspect multinode-930892 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 20:03:40.563213   72782 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1205 20:03:40.567476   72782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
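The hosts-file update above deliberately rewrites /etc/hosts with cp rather than renaming a file over it: inside a container /etc/hosts is a bind mount, so replacing the inode with mv fails, while cp only rewrites the contents. The pattern, isolated from the log line:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
	  echo $'192.168.58.1\thost.minikube.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts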
	I1205 20:03:40.580077   72782 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 20:03:40.580149   72782 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:03:40.643726   72782 command_runner.go:130] > {
	I1205 20:03:40.643745   72782 command_runner.go:130] >   "images": [
	I1205 20:03:40.643750   72782 command_runner.go:130] >     {
	I1205 20:03:40.643816   72782 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I1205 20:03:40.643823   72782 command_runner.go:130] >       "repoTags": [
	I1205 20:03:40.643831   72782 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1205 20:03:40.643836   72782 command_runner.go:130] >       ],
	I1205 20:03:40.643841   72782 command_runner.go:130] >       "repoDigests": [
	I1205 20:03:40.643851   72782 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1205 20:03:40.643864   72782 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I1205 20:03:40.643869   72782 command_runner.go:130] >       ],
	I1205 20:03:40.643875   72782 command_runner.go:130] >       "size": "60867618",
	I1205 20:03:40.643880   72782 command_runner.go:130] >       "uid": null,
	I1205 20:03:40.643887   72782 command_runner.go:130] >       "username": "",
	I1205 20:03:40.643896   72782 command_runner.go:130] >       "spec": null,
	I1205 20:03:40.643901   72782 command_runner.go:130] >       "pinned": false
	I1205 20:03:40.643905   72782 command_runner.go:130] >     },
	I1205 20:03:40.643910   72782 command_runner.go:130] >     {
	I1205 20:03:40.643917   72782 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1205 20:03:40.643922   72782 command_runner.go:130] >       "repoTags": [
	I1205 20:03:40.643929   72782 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1205 20:03:40.643934   72782 command_runner.go:130] >       ],
	I1205 20:03:40.643939   72782 command_runner.go:130] >       "repoDigests": [
	I1205 20:03:40.643948   72782 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1205 20:03:40.643958   72782 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1205 20:03:40.643963   72782 command_runner.go:130] >       ],
	I1205 20:03:40.643972   72782 command_runner.go:130] >       "size": "29037500",
	I1205 20:03:40.643979   72782 command_runner.go:130] >       "uid": null,
	I1205 20:03:40.643985   72782 command_runner.go:130] >       "username": "",
	I1205 20:03:40.643990   72782 command_runner.go:130] >       "spec": null,
	I1205 20:03:40.643994   72782 command_runner.go:130] >       "pinned": false
	I1205 20:03:40.643999   72782 command_runner.go:130] >     },
	I1205 20:03:40.644003   72782 command_runner.go:130] >     {
	I1205 20:03:40.644012   72782 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I1205 20:03:40.644017   72782 command_runner.go:130] >       "repoTags": [
	I1205 20:03:40.644024   72782 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1205 20:03:40.644028   72782 command_runner.go:130] >       ],
	I1205 20:03:40.644033   72782 command_runner.go:130] >       "repoDigests": [
	I1205 20:03:40.644042   72782 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I1205 20:03:40.644052   72782 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I1205 20:03:40.644056   72782 command_runner.go:130] >       ],
	I1205 20:03:40.644061   72782 command_runner.go:130] >       "size": "51393451",
	I1205 20:03:40.644066   72782 command_runner.go:130] >       "uid": null,
	I1205 20:03:40.644071   72782 command_runner.go:130] >       "username": "",
	I1205 20:03:40.644076   72782 command_runner.go:130] >       "spec": null,
	I1205 20:03:40.644084   72782 command_runner.go:130] >       "pinned": false
	I1205 20:03:40.644088   72782 command_runner.go:130] >     },
	I1205 20:03:40.644093   72782 command_runner.go:130] >     {
	I1205 20:03:40.644100   72782 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I1205 20:03:40.644105   72782 command_runner.go:130] >       "repoTags": [
	I1205 20:03:40.644111   72782 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1205 20:03:40.644115   72782 command_runner.go:130] >       ],
	I1205 20:03:40.644120   72782 command_runner.go:130] >       "repoDigests": [
	I1205 20:03:40.644129   72782 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I1205 20:03:40.644138   72782 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I1205 20:03:40.644145   72782 command_runner.go:130] >       ],
	I1205 20:03:40.644150   72782 command_runner.go:130] >       "size": "182203183",
	I1205 20:03:40.644155   72782 command_runner.go:130] >       "uid": {
	I1205 20:03:40.644160   72782 command_runner.go:130] >         "value": "0"
	I1205 20:03:40.644164   72782 command_runner.go:130] >       },
	I1205 20:03:40.644169   72782 command_runner.go:130] >       "username": "",
	I1205 20:03:40.644174   72782 command_runner.go:130] >       "spec": null,
	I1205 20:03:40.644179   72782 command_runner.go:130] >       "pinned": false
	I1205 20:03:40.644183   72782 command_runner.go:130] >     },
	I1205 20:03:40.644188   72782 command_runner.go:130] >     {
	I1205 20:03:40.644195   72782 command_runner.go:130] >       "id": "04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419",
	I1205 20:03:40.644200   72782 command_runner.go:130] >       "repoTags": [
	I1205 20:03:40.644206   72782 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1205 20:03:40.644211   72782 command_runner.go:130] >       ],
	I1205 20:03:40.644216   72782 command_runner.go:130] >       "repoDigests": [
	I1205 20:03:40.644225   72782 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb",
	I1205 20:03:40.644234   72782 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"
	I1205 20:03:40.644239   72782 command_runner.go:130] >       ],
	I1205 20:03:40.644244   72782 command_runner.go:130] >       "size": "121119694",
	I1205 20:03:40.644249   72782 command_runner.go:130] >       "uid": {
	I1205 20:03:40.644254   72782 command_runner.go:130] >         "value": "0"
	I1205 20:03:40.644258   72782 command_runner.go:130] >       },
	I1205 20:03:40.644263   72782 command_runner.go:130] >       "username": "",
	I1205 20:03:40.644268   72782 command_runner.go:130] >       "spec": null,
	I1205 20:03:40.644273   72782 command_runner.go:130] >       "pinned": false
	I1205 20:03:40.644277   72782 command_runner.go:130] >     },
	I1205 20:03:40.644282   72782 command_runner.go:130] >     {
	I1205 20:03:40.644290   72782 command_runner.go:130] >       "id": "9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b",
	I1205 20:03:40.644295   72782 command_runner.go:130] >       "repoTags": [
	I1205 20:03:40.644302   72782 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1205 20:03:40.644306   72782 command_runner.go:130] >       ],
	I1205 20:03:40.644311   72782 command_runner.go:130] >       "repoDigests": [
	I1205 20:03:40.644321   72782 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1205 20:03:40.644330   72782 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"
	I1205 20:03:40.644334   72782 command_runner.go:130] >       ],
	I1205 20:03:40.644341   72782 command_runner.go:130] >       "size": "117252916",
	I1205 20:03:40.644345   72782 command_runner.go:130] >       "uid": {
	I1205 20:03:40.644351   72782 command_runner.go:130] >         "value": "0"
	I1205 20:03:40.644355   72782 command_runner.go:130] >       },
	I1205 20:03:40.644360   72782 command_runner.go:130] >       "username": "",
	I1205 20:03:40.644365   72782 command_runner.go:130] >       "spec": null,
	I1205 20:03:40.644370   72782 command_runner.go:130] >       "pinned": false
	I1205 20:03:40.644374   72782 command_runner.go:130] >     },
	I1205 20:03:40.644378   72782 command_runner.go:130] >     {
	I1205 20:03:40.644387   72782 command_runner.go:130] >       "id": "3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39",
	I1205 20:03:40.644392   72782 command_runner.go:130] >       "repoTags": [
	I1205 20:03:40.644398   72782 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1205 20:03:40.644402   72782 command_runner.go:130] >       ],
	I1205 20:03:40.644407   72782 command_runner.go:130] >       "repoDigests": [
	I1205 20:03:40.644416   72782 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68",
	I1205 20:03:40.644425   72782 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1205 20:03:40.644430   72782 command_runner.go:130] >       ],
	I1205 20:03:40.644435   72782 command_runner.go:130] >       "size": "69992343",
	I1205 20:03:40.644439   72782 command_runner.go:130] >       "uid": null,
	I1205 20:03:40.644444   72782 command_runner.go:130] >       "username": "",
	I1205 20:03:40.644449   72782 command_runner.go:130] >       "spec": null,
	I1205 20:03:40.644454   72782 command_runner.go:130] >       "pinned": false
	I1205 20:03:40.644458   72782 command_runner.go:130] >     },
	I1205 20:03:40.644462   72782 command_runner.go:130] >     {
	I1205 20:03:40.644470   72782 command_runner.go:130] >       "id": "05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54",
	I1205 20:03:40.644475   72782 command_runner.go:130] >       "repoTags": [
	I1205 20:03:40.644481   72782 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1205 20:03:40.644486   72782 command_runner.go:130] >       ],
	I1205 20:03:40.644491   72782 command_runner.go:130] >       "repoDigests": [
	I1205 20:03:40.644517   72782 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1205 20:03:40.644527   72782 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"
	I1205 20:03:40.644532   72782 command_runner.go:130] >       ],
	I1205 20:03:40.644538   72782 command_runner.go:130] >       "size": "59253556",
	I1205 20:03:40.644543   72782 command_runner.go:130] >       "uid": {
	I1205 20:03:40.644548   72782 command_runner.go:130] >         "value": "0"
	I1205 20:03:40.644552   72782 command_runner.go:130] >       },
	I1205 20:03:40.644557   72782 command_runner.go:130] >       "username": "",
	I1205 20:03:40.644562   72782 command_runner.go:130] >       "spec": null,
	I1205 20:03:40.644567   72782 command_runner.go:130] >       "pinned": false
	I1205 20:03:40.644571   72782 command_runner.go:130] >     },
	I1205 20:03:40.644575   72782 command_runner.go:130] >     {
	I1205 20:03:40.644582   72782 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I1205 20:03:40.644587   72782 command_runner.go:130] >       "repoTags": [
	I1205 20:03:40.644593   72782 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1205 20:03:40.644598   72782 command_runner.go:130] >       ],
	I1205 20:03:40.644603   72782 command_runner.go:130] >       "repoDigests": [
	I1205 20:03:40.644612   72782 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I1205 20:03:40.644622   72782 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I1205 20:03:40.644627   72782 command_runner.go:130] >       ],
	I1205 20:03:40.644632   72782 command_runner.go:130] >       "size": "520014",
	I1205 20:03:40.644637   72782 command_runner.go:130] >       "uid": {
	I1205 20:03:40.644642   72782 command_runner.go:130] >         "value": "65535"
	I1205 20:03:40.644646   72782 command_runner.go:130] >       },
	I1205 20:03:40.644651   72782 command_runner.go:130] >       "username": "",
	I1205 20:03:40.644656   72782 command_runner.go:130] >       "spec": null,
	I1205 20:03:40.644661   72782 command_runner.go:130] >       "pinned": false
	I1205 20:03:40.644666   72782 command_runner.go:130] >     }
	I1205 20:03:40.644670   72782 command_runner.go:130] >   ]
	I1205 20:03:40.644674   72782 command_runner.go:130] > }
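minikube parses this JSON in Go to decide whether the preload tarball must be extracted; every image the v1.28.4/cri-o preload expects is already present, hence "all images are preloaded" below. For eyeballing the same dump by hand, a jq one-liner works (assumes jq is available; it is not part of the node image):

	sudo crictl images --output json | jq -r '.images[] | "\(.repoTags[0])  \(.size)"'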
	I1205 20:03:40.647349   72782 crio.go:496] all images are preloaded for cri-o runtime.
	I1205 20:03:40.647370   72782 crio.go:415] Images already preloaded, skipping extraction
	I1205 20:03:40.647422   72782 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:03:40.684506   72782 command_runner.go:130] > {
	I1205 20:03:40.684524   72782 command_runner.go:130] >   "images": [
	I1205 20:03:40.684530   72782 command_runner.go:130] >     {
	I1205 20:03:40.684539   72782 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I1205 20:03:40.684545   72782 command_runner.go:130] >       "repoTags": [
	I1205 20:03:40.684553   72782 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1205 20:03:40.684557   72782 command_runner.go:130] >       ],
	I1205 20:03:40.684562   72782 command_runner.go:130] >       "repoDigests": [
	I1205 20:03:40.684572   72782 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1205 20:03:40.684582   72782 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I1205 20:03:40.684587   72782 command_runner.go:130] >       ],
	I1205 20:03:40.684592   72782 command_runner.go:130] >       "size": "60867618",
	I1205 20:03:40.684597   72782 command_runner.go:130] >       "uid": null,
	I1205 20:03:40.684607   72782 command_runner.go:130] >       "username": "",
	I1205 20:03:40.684614   72782 command_runner.go:130] >       "spec": null,
	I1205 20:03:40.684619   72782 command_runner.go:130] >       "pinned": false
	I1205 20:03:40.684624   72782 command_runner.go:130] >     },
	I1205 20:03:40.684628   72782 command_runner.go:130] >     {
	I1205 20:03:40.684636   72782 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1205 20:03:40.684641   72782 command_runner.go:130] >       "repoTags": [
	I1205 20:03:40.684648   72782 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1205 20:03:40.684652   72782 command_runner.go:130] >       ],
	I1205 20:03:40.684658   72782 command_runner.go:130] >       "repoDigests": [
	I1205 20:03:40.684667   72782 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1205 20:03:40.684677   72782 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1205 20:03:40.684682   72782 command_runner.go:130] >       ],
	I1205 20:03:40.684688   72782 command_runner.go:130] >       "size": "29037500",
	I1205 20:03:40.684693   72782 command_runner.go:130] >       "uid": null,
	I1205 20:03:40.684698   72782 command_runner.go:130] >       "username": "",
	I1205 20:03:40.684703   72782 command_runner.go:130] >       "spec": null,
	I1205 20:03:40.684708   72782 command_runner.go:130] >       "pinned": false
	I1205 20:03:40.684712   72782 command_runner.go:130] >     },
	I1205 20:03:40.684717   72782 command_runner.go:130] >     {
	I1205 20:03:40.684724   72782 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I1205 20:03:40.684729   72782 command_runner.go:130] >       "repoTags": [
	I1205 20:03:40.684736   72782 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1205 20:03:40.684741   72782 command_runner.go:130] >       ],
	I1205 20:03:40.684746   72782 command_runner.go:130] >       "repoDigests": [
	I1205 20:03:40.684755   72782 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I1205 20:03:40.684764   72782 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I1205 20:03:40.684769   72782 command_runner.go:130] >       ],
	I1205 20:03:40.684776   72782 command_runner.go:130] >       "size": "51393451",
	I1205 20:03:40.684781   72782 command_runner.go:130] >       "uid": null,
	I1205 20:03:40.684787   72782 command_runner.go:130] >       "username": "",
	I1205 20:03:40.684791   72782 command_runner.go:130] >       "spec": null,
	I1205 20:03:40.684801   72782 command_runner.go:130] >       "pinned": false
	I1205 20:03:40.684805   72782 command_runner.go:130] >     },
	I1205 20:03:40.684809   72782 command_runner.go:130] >     {
	I1205 20:03:40.684816   72782 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I1205 20:03:40.684822   72782 command_runner.go:130] >       "repoTags": [
	I1205 20:03:40.684828   72782 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1205 20:03:40.684832   72782 command_runner.go:130] >       ],
	I1205 20:03:40.684837   72782 command_runner.go:130] >       "repoDigests": [
	I1205 20:03:40.684846   72782 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I1205 20:03:40.684855   72782 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I1205 20:03:40.684862   72782 command_runner.go:130] >       ],
	I1205 20:03:40.684868   72782 command_runner.go:130] >       "size": "182203183",
	I1205 20:03:40.684872   72782 command_runner.go:130] >       "uid": {
	I1205 20:03:40.684877   72782 command_runner.go:130] >         "value": "0"
	I1205 20:03:40.684881   72782 command_runner.go:130] >       },
	I1205 20:03:40.684886   72782 command_runner.go:130] >       "username": "",
	I1205 20:03:40.684892   72782 command_runner.go:130] >       "spec": null,
	I1205 20:03:40.684897   72782 command_runner.go:130] >       "pinned": false
	I1205 20:03:40.684901   72782 command_runner.go:130] >     },
	I1205 20:03:40.684905   72782 command_runner.go:130] >     {
	I1205 20:03:40.684913   72782 command_runner.go:130] >       "id": "04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419",
	I1205 20:03:40.684918   72782 command_runner.go:130] >       "repoTags": [
	I1205 20:03:40.684924   72782 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1205 20:03:40.684929   72782 command_runner.go:130] >       ],
	I1205 20:03:40.684934   72782 command_runner.go:130] >       "repoDigests": [
	I1205 20:03:40.684943   72782 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb",
	I1205 20:03:40.684952   72782 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"
	I1205 20:03:40.684957   72782 command_runner.go:130] >       ],
	I1205 20:03:40.684962   72782 command_runner.go:130] >       "size": "121119694",
	I1205 20:03:40.684967   72782 command_runner.go:130] >       "uid": {
	I1205 20:03:40.684972   72782 command_runner.go:130] >         "value": "0"
	I1205 20:03:40.684976   72782 command_runner.go:130] >       },
	I1205 20:03:40.684982   72782 command_runner.go:130] >       "username": "",
	I1205 20:03:40.684987   72782 command_runner.go:130] >       "spec": null,
	I1205 20:03:40.684992   72782 command_runner.go:130] >       "pinned": false
	I1205 20:03:40.684996   72782 command_runner.go:130] >     },
	I1205 20:03:40.685000   72782 command_runner.go:130] >     {
	I1205 20:03:40.685008   72782 command_runner.go:130] >       "id": "9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b",
	I1205 20:03:40.685013   72782 command_runner.go:130] >       "repoTags": [
	I1205 20:03:40.685020   72782 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1205 20:03:40.685030   72782 command_runner.go:130] >       ],
	I1205 20:03:40.685035   72782 command_runner.go:130] >       "repoDigests": [
	I1205 20:03:40.685045   72782 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1205 20:03:40.685054   72782 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"
	I1205 20:03:40.685059   72782 command_runner.go:130] >       ],
	I1205 20:03:40.685066   72782 command_runner.go:130] >       "size": "117252916",
	I1205 20:03:40.685071   72782 command_runner.go:130] >       "uid": {
	I1205 20:03:40.685075   72782 command_runner.go:130] >         "value": "0"
	I1205 20:03:40.685080   72782 command_runner.go:130] >       },
	I1205 20:03:40.685085   72782 command_runner.go:130] >       "username": "",
	I1205 20:03:40.685089   72782 command_runner.go:130] >       "spec": null,
	I1205 20:03:40.685094   72782 command_runner.go:130] >       "pinned": false
	I1205 20:03:40.685099   72782 command_runner.go:130] >     },
	I1205 20:03:40.685103   72782 command_runner.go:130] >     {
	I1205 20:03:40.685112   72782 command_runner.go:130] >       "id": "3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39",
	I1205 20:03:40.685116   72782 command_runner.go:130] >       "repoTags": [
	I1205 20:03:40.685123   72782 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1205 20:03:40.685127   72782 command_runner.go:130] >       ],
	I1205 20:03:40.685133   72782 command_runner.go:130] >       "repoDigests": [
	I1205 20:03:40.685142   72782 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68",
	I1205 20:03:40.685152   72782 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1205 20:03:40.685156   72782 command_runner.go:130] >       ],
	I1205 20:03:40.685161   72782 command_runner.go:130] >       "size": "69992343",
	I1205 20:03:40.685166   72782 command_runner.go:130] >       "uid": null,
	I1205 20:03:40.685171   72782 command_runner.go:130] >       "username": "",
	I1205 20:03:40.685175   72782 command_runner.go:130] >       "spec": null,
	I1205 20:03:40.685180   72782 command_runner.go:130] >       "pinned": false
	I1205 20:03:40.685184   72782 command_runner.go:130] >     },
	I1205 20:03:40.685189   72782 command_runner.go:130] >     {
	I1205 20:03:40.685196   72782 command_runner.go:130] >       "id": "05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54",
	I1205 20:03:40.685201   72782 command_runner.go:130] >       "repoTags": [
	I1205 20:03:40.685207   72782 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1205 20:03:40.685211   72782 command_runner.go:130] >       ],
	I1205 20:03:40.685216   72782 command_runner.go:130] >       "repoDigests": [
	I1205 20:03:40.685249   72782 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1205 20:03:40.685259   72782 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"
	I1205 20:03:40.685264   72782 command_runner.go:130] >       ],
	I1205 20:03:40.685269   72782 command_runner.go:130] >       "size": "59253556",
	I1205 20:03:40.685274   72782 command_runner.go:130] >       "uid": {
	I1205 20:03:40.685279   72782 command_runner.go:130] >         "value": "0"
	I1205 20:03:40.685283   72782 command_runner.go:130] >       },
	I1205 20:03:40.685288   72782 command_runner.go:130] >       "username": "",
	I1205 20:03:40.685293   72782 command_runner.go:130] >       "spec": null,
	I1205 20:03:40.685298   72782 command_runner.go:130] >       "pinned": false
	I1205 20:03:40.685302   72782 command_runner.go:130] >     },
	I1205 20:03:40.685306   72782 command_runner.go:130] >     {
	I1205 20:03:40.685314   72782 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I1205 20:03:40.685318   72782 command_runner.go:130] >       "repoTags": [
	I1205 20:03:40.685324   72782 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1205 20:03:40.685328   72782 command_runner.go:130] >       ],
	I1205 20:03:40.685333   72782 command_runner.go:130] >       "repoDigests": [
	I1205 20:03:40.685342   72782 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I1205 20:03:40.685352   72782 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I1205 20:03:40.685356   72782 command_runner.go:130] >       ],
	I1205 20:03:40.685361   72782 command_runner.go:130] >       "size": "520014",
	I1205 20:03:40.685366   72782 command_runner.go:130] >       "uid": {
	I1205 20:03:40.685371   72782 command_runner.go:130] >         "value": "65535"
	I1205 20:03:40.685375   72782 command_runner.go:130] >       },
	I1205 20:03:40.685380   72782 command_runner.go:130] >       "username": "",
	I1205 20:03:40.685385   72782 command_runner.go:130] >       "spec": null,
	I1205 20:03:40.685390   72782 command_runner.go:130] >       "pinned": false
	I1205 20:03:40.685394   72782 command_runner.go:130] >     }
	I1205 20:03:40.685399   72782 command_runner.go:130] >   ]
	I1205 20:03:40.685403   72782 command_runner.go:130] > }
	I1205 20:03:40.688394   72782 crio.go:496] all images are preloaded for cri-o runtime.
	I1205 20:03:40.688410   72782 cache_images.go:84] Images are preloaded, skipping loading
	I1205 20:03:40.688495   72782 ssh_runner.go:195] Run: crio config
	I1205 20:03:40.745337   72782 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1205 20:03:40.745360   72782 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1205 20:03:40.745372   72782 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1205 20:03:40.745376   72782 command_runner.go:130] > #
	I1205 20:03:40.745385   72782 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1205 20:03:40.745392   72782 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1205 20:03:40.745400   72782 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1205 20:03:40.745417   72782 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1205 20:03:40.745425   72782 command_runner.go:130] > # reload'.
	I1205 20:03:40.745433   72782 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1205 20:03:40.745441   72782 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1205 20:03:40.745448   72782 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1205 20:03:40.745455   72782 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1205 20:03:40.745460   72782 command_runner.go:130] > [crio]
	I1205 20:03:40.745468   72782 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1205 20:03:40.745474   72782 command_runner.go:130] > # containers images, in this directory.
	I1205 20:03:40.745507   72782 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1205 20:03:40.745515   72782 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1205 20:03:40.746034   72782 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1205 20:03:40.746054   72782 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1205 20:03:40.746063   72782 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1205 20:03:40.746756   72782 command_runner.go:130] > # storage_driver = "vfs"
	I1205 20:03:40.746775   72782 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1205 20:03:40.746783   72782 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1205 20:03:40.747099   72782 command_runner.go:130] > # storage_option = [
	I1205 20:03:40.747338   72782 command_runner.go:130] > # ]
	I1205 20:03:40.747360   72782 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1205 20:03:40.747368   72782 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1205 20:03:40.747828   72782 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1205 20:03:40.747844   72782 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1205 20:03:40.747852   72782 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1205 20:03:40.747870   72782 command_runner.go:130] > # always happen on a node reboot
	I1205 20:03:40.748523   72782 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1205 20:03:40.748541   72782 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1205 20:03:40.748549   72782 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1205 20:03:40.748569   72782 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1205 20:03:40.750187   72782 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1205 20:03:40.750206   72782 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1205 20:03:40.750227   72782 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1205 20:03:40.750872   72782 command_runner.go:130] > # internal_wipe = true
	I1205 20:03:40.750889   72782 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1205 20:03:40.750898   72782 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1205 20:03:40.750918   72782 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1205 20:03:40.751560   72782 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1205 20:03:40.751585   72782 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1205 20:03:40.751600   72782 command_runner.go:130] > [crio.api]
	I1205 20:03:40.751611   72782 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1205 20:03:40.752259   72782 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1205 20:03:40.752273   72782 command_runner.go:130] > # IP address on which the stream server will listen.
	I1205 20:03:40.752912   72782 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1205 20:03:40.752929   72782 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1205 20:03:40.752936   72782 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1205 20:03:40.753527   72782 command_runner.go:130] > # stream_port = "0"
	I1205 20:03:40.753545   72782 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1205 20:03:40.754180   72782 command_runner.go:130] > # stream_enable_tls = false
	I1205 20:03:40.754220   72782 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1205 20:03:40.754633   72782 command_runner.go:130] > # stream_idle_timeout = ""
	I1205 20:03:40.754650   72782 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1205 20:03:40.754658   72782 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1205 20:03:40.754663   72782 command_runner.go:130] > # minutes.
	I1205 20:03:40.755132   72782 command_runner.go:130] > # stream_tls_cert = ""
	I1205 20:03:40.755176   72782 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1205 20:03:40.755199   72782 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1205 20:03:40.755558   72782 command_runner.go:130] > # stream_tls_key = ""
	I1205 20:03:40.755597   72782 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1205 20:03:40.755619   72782 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1205 20:03:40.755644   72782 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1205 20:03:40.755914   72782 command_runner.go:130] > # stream_tls_ca = ""
	I1205 20:03:40.755957   72782 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1205 20:03:40.756471   72782 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1205 20:03:40.756492   72782 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1205 20:03:40.757085   72782 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1205 20:03:40.757162   72782 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1205 20:03:40.757194   72782 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1205 20:03:40.757229   72782 command_runner.go:130] > [crio.runtime]
	I1205 20:03:40.757257   72782 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1205 20:03:40.757288   72782 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1205 20:03:40.757306   72782 command_runner.go:130] > # "nofile=1024:2048"
	I1205 20:03:40.757343   72782 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1205 20:03:40.757388   72782 command_runner.go:130] > # default_ulimits = [
	I1205 20:03:40.757679   72782 command_runner.go:130] > # ]
	I1205 20:03:40.757741   72782 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1205 20:03:40.758154   72782 command_runner.go:130] > # no_pivot = false
	I1205 20:03:40.758170   72782 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1205 20:03:40.758187   72782 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1205 20:03:40.758770   72782 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1205 20:03:40.758825   72782 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1205 20:03:40.758846   72782 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1205 20:03:40.758867   72782 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 20:03:40.759254   72782 command_runner.go:130] > # conmon = ""
	I1205 20:03:40.759298   72782 command_runner.go:130] > # Cgroup setting for conmon
	I1205 20:03:40.759320   72782 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1205 20:03:40.759523   72782 command_runner.go:130] > conmon_cgroup = "pod"
	I1205 20:03:40.759565   72782 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1205 20:03:40.759577   72782 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1205 20:03:40.759586   72782 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 20:03:40.759694   72782 command_runner.go:130] > # conmon_env = [
	I1205 20:03:40.760078   72782 command_runner.go:130] > # ]
	I1205 20:03:40.760109   72782 command_runner.go:130] > # Additional environment variables to set for all the
	I1205 20:03:40.760128   72782 command_runner.go:130] > # containers. These are overridden if set in the
	I1205 20:03:40.760166   72782 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1205 20:03:40.760307   72782 command_runner.go:130] > # default_env = [
	I1205 20:03:40.760675   72782 command_runner.go:130] > # ]
	I1205 20:03:40.760687   72782 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1205 20:03:40.761266   72782 command_runner.go:130] > # selinux = false
	I1205 20:03:40.761279   72782 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1205 20:03:40.761319   72782 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1205 20:03:40.761330   72782 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1205 20:03:40.761747   72782 command_runner.go:130] > # seccomp_profile = ""
	I1205 20:03:40.761796   72782 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1205 20:03:40.761816   72782 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1205 20:03:40.761869   72782 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1205 20:03:40.761905   72782 command_runner.go:130] > # which might increase security.
	I1205 20:03:40.762357   72782 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1205 20:03:40.762374   72782 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1205 20:03:40.762385   72782 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1205 20:03:40.762394   72782 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1205 20:03:40.762405   72782 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1205 20:03:40.762414   72782 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:03:40.763045   72782 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1205 20:03:40.763061   72782 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1205 20:03:40.763067   72782 command_runner.go:130] > # the cgroup blockio controller.
	I1205 20:03:40.763545   72782 command_runner.go:130] > # blockio_config_file = ""
	I1205 20:03:40.763595   72782 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1205 20:03:40.763615   72782 command_runner.go:130] > # irqbalance daemon.
	I1205 20:03:40.764134   72782 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1205 20:03:40.764147   72782 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1205 20:03:40.764154   72782 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:03:40.764585   72782 command_runner.go:130] > # rdt_config_file = ""
	I1205 20:03:40.764629   72782 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1205 20:03:40.764980   72782 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1205 20:03:40.765025   72782 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1205 20:03:40.765366   72782 command_runner.go:130] > # separate_pull_cgroup = ""
	I1205 20:03:40.765410   72782 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1205 20:03:40.765432   72782 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1205 20:03:40.765470   72782 command_runner.go:130] > # will be added.
	I1205 20:03:40.765576   72782 command_runner.go:130] > # default_capabilities = [
	I1205 20:03:40.765976   72782 command_runner.go:130] > # 	"CHOWN",
	I1205 20:03:40.766265   72782 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1205 20:03:40.766548   72782 command_runner.go:130] > # 	"FSETID",
	I1205 20:03:40.766869   72782 command_runner.go:130] > # 	"FOWNER",
	I1205 20:03:40.767249   72782 command_runner.go:130] > # 	"SETGID",
	I1205 20:03:40.767279   72782 command_runner.go:130] > # 	"SETUID",
	I1205 20:03:40.767299   72782 command_runner.go:130] > # 	"SETPCAP",
	I1205 20:03:40.767431   72782 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1205 20:03:40.767460   72782 command_runner.go:130] > # 	"KILL",
	I1205 20:03:40.767621   72782 command_runner.go:130] > # ]
	I1205 20:03:40.767654   72782 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1205 20:03:40.767677   72782 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1205 20:03:40.767716   72782 command_runner.go:130] > # add_inheritable_capabilities = true
	I1205 20:03:40.767742   72782 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1205 20:03:40.767779   72782 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 20:03:40.767803   72782 command_runner.go:130] > # default_sysctls = [
	I1205 20:03:40.767837   72782 command_runner.go:130] > # ]
	I1205 20:03:40.767857   72782 command_runner.go:130] > # List of devices on the host that a
	I1205 20:03:40.767895   72782 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1205 20:03:40.767925   72782 command_runner.go:130] > # allowed_devices = [
	I1205 20:03:40.768055   72782 command_runner.go:130] > # 	"/dev/fuse",
	I1205 20:03:40.768113   72782 command_runner.go:130] > # ]
	I1205 20:03:40.768170   72782 command_runner.go:130] > # List of additional devices. specified as
	I1205 20:03:40.768250   72782 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1205 20:03:40.768321   72782 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1205 20:03:40.768389   72782 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 20:03:40.768416   72782 command_runner.go:130] > # additional_devices = [
	I1205 20:03:40.768451   72782 command_runner.go:130] > # ]
	I1205 20:03:40.768487   72782 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1205 20:03:40.768505   72782 command_runner.go:130] > # cdi_spec_dirs = [
	I1205 20:03:40.768520   72782 command_runner.go:130] > # 	"/etc/cdi",
	I1205 20:03:40.768555   72782 command_runner.go:130] > # 	"/var/run/cdi",
	I1205 20:03:40.768587   72782 command_runner.go:130] > # ]
	I1205 20:03:40.768608   72782 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1205 20:03:40.768629   72782 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1205 20:03:40.768662   72782 command_runner.go:130] > # Defaults to false.
	I1205 20:03:40.768686   72782 command_runner.go:130] > # device_ownership_from_security_context = false
	I1205 20:03:40.768708   72782 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1205 20:03:40.768748   72782 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1205 20:03:40.768768   72782 command_runner.go:130] > # hooks_dir = [
	I1205 20:03:40.768785   72782 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1205 20:03:40.768818   72782 command_runner.go:130] > # ]
	I1205 20:03:40.768843   72782 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1205 20:03:40.768866   72782 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1205 20:03:40.768909   72782 command_runner.go:130] > # its default mounts from the following two files:
	I1205 20:03:40.768928   72782 command_runner.go:130] > #
	I1205 20:03:40.768949   72782 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1205 20:03:40.768988   72782 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1205 20:03:40.769009   72782 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1205 20:03:40.769040   72782 command_runner.go:130] > #
	I1205 20:03:40.769071   72782 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1205 20:03:40.769097   72782 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1205 20:03:40.769129   72782 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1205 20:03:40.769149   72782 command_runner.go:130] > #      only add mounts it finds in this file.
	I1205 20:03:40.769175   72782 command_runner.go:130] > #
	I1205 20:03:40.769212   72782 command_runner.go:130] > # default_mounts_file = ""
	I1205 20:03:40.769232   72782 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1205 20:03:40.769253   72782 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1205 20:03:40.769378   72782 command_runner.go:130] > # pids_limit = 0
	I1205 20:03:40.769408   72782 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1205 20:03:40.769431   72782 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1205 20:03:40.769469   72782 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1205 20:03:40.769494   72782 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1205 20:03:40.769514   72782 command_runner.go:130] > # log_size_max = -1
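If a cap were wanted here, the constraint above requires any positive value to be at least 8192 bytes to match conmon's read buffer; a minimal example setting a 16 KiB limit:

	# 16384 >= 8192, so it satisfies conmon's read-buffer minimum
	log_size_max = 16384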
	I1205 20:03:40.769553   72782 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1205 20:03:40.769578   72782 command_runner.go:130] > # log_to_journald = false
	I1205 20:03:40.769599   72782 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1205 20:03:40.769636   72782 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1205 20:03:40.769656   72782 command_runner.go:130] > # Path to directory for container attach sockets.
	I1205 20:03:40.769677   72782 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1205 20:03:40.769715   72782 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1205 20:03:40.769733   72782 command_runner.go:130] > # bind_mount_prefix = ""
	I1205 20:03:40.769753   72782 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1205 20:03:40.769790   72782 command_runner.go:130] > # read_only = false
	I1205 20:03:40.769816   72782 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1205 20:03:40.769837   72782 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1205 20:03:40.769868   72782 command_runner.go:130] > # live configuration reload.
	I1205 20:03:40.769892   72782 command_runner.go:130] > # log_level = "info"
	I1205 20:03:40.769916   72782 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1205 20:03:40.769949   72782 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:03:40.769973   72782 command_runner.go:130] > # log_filter = ""
	I1205 20:03:40.769995   72782 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1205 20:03:40.770029   72782 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1205 20:03:40.770051   72782 command_runner.go:130] > # separated by comma.
	I1205 20:03:40.770070   72782 command_runner.go:130] > # uid_mappings = ""
	I1205 20:03:40.770106   72782 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1205 20:03:40.770128   72782 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1205 20:03:40.770147   72782 command_runner.go:130] > # separated by comma.
	I1205 20:03:40.770181   72782 command_runner.go:130] > # gid_mappings = ""
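A single range in the containerID:HostID:Size form described above would look like this (the 100000/65536 host range is illustrative only):

	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"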
	I1205 20:03:40.770203   72782 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1205 20:03:40.770224   72782 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 20:03:40.770261   72782 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 20:03:40.770284   72782 command_runner.go:130] > # minimum_mappable_uid = -1
	I1205 20:03:40.770304   72782 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1205 20:03:40.770343   72782 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 20:03:40.770367   72782 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 20:03:40.770386   72782 command_runner.go:130] > # minimum_mappable_gid = -1
	I1205 20:03:40.770406   72782 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1205 20:03:40.770447   72782 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1205 20:03:40.770473   72782 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1205 20:03:40.770507   72782 command_runner.go:130] > # ctr_stop_timeout = 30
	I1205 20:03:40.770531   72782 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1205 20:03:40.770551   72782 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1205 20:03:40.770591   72782 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1205 20:03:40.770616   72782 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1205 20:03:40.770634   72782 command_runner.go:130] > # drop_infra_ctr = true
	I1205 20:03:40.770669   72782 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1205 20:03:40.770706   72782 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1205 20:03:40.770741   72782 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1205 20:03:40.770784   72782 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1205 20:03:40.770808   72782 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1205 20:03:40.770840   72782 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1205 20:03:40.770860   72782 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1205 20:03:40.770898   72782 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1205 20:03:40.770929   72782 command_runner.go:130] > # pinns_path = ""
	I1205 20:03:40.770951   72782 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1205 20:03:40.771012   72782 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1205 20:03:40.771034   72782 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1205 20:03:40.771053   72782 command_runner.go:130] > # default_runtime = "runc"
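As a sketch, changing the default only requires naming another entry in the runtimes map, assuming a matching [crio.runtime.runtimes.crun] table exists:

	default_runtime = "crun"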
	I1205 20:03:40.771095   72782 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1205 20:03:40.771119   72782 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1205 20:03:40.771165   72782 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1205 20:03:40.771192   72782 command_runner.go:130] > # creation as a file is not desired either.
	I1205 20:03:40.771216   72782 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1205 20:03:40.771268   72782 command_runner.go:130] > # the hostname is being managed dynamically.
	I1205 20:03:40.771289   72782 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1205 20:03:40.771306   72782 command_runner.go:130] > # ]
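Following the /etc/hostname example in the comment above, a populated list would read:

	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]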
	I1205 20:03:40.771349   72782 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1205 20:03:40.771448   72782 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1205 20:03:40.771482   72782 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1205 20:03:40.771502   72782 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1205 20:03:40.771543   72782 command_runner.go:130] > #
	I1205 20:03:40.771564   72782 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1205 20:03:40.771585   72782 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1205 20:03:40.771707   72782 command_runner.go:130] > #  runtime_type = "oci"
	I1205 20:03:40.771727   72782 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1205 20:03:40.771747   72782 command_runner.go:130] > #  privileged_without_host_devices = false
	I1205 20:03:40.771809   72782 command_runner.go:130] > #  allowed_annotations = []
	I1205 20:03:40.771830   72782 command_runner.go:130] > # Where:
	I1205 20:03:40.771850   72782 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1205 20:03:40.771892   72782 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1205 20:03:40.771923   72782 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1205 20:03:40.771944   72782 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1205 20:03:40.771960   72782 command_runner.go:130] > #   in $PATH.
	I1205 20:03:40.772071   72782 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1205 20:03:40.772102   72782 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1205 20:03:40.772122   72782 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1205 20:03:40.772138   72782 command_runner.go:130] > #   state.
	I1205 20:03:40.772158   72782 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1205 20:03:40.772195   72782 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1205 20:03:40.772218   72782 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1205 20:03:40.772238   72782 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1205 20:03:40.772267   72782 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1205 20:03:40.772296   72782 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1205 20:03:40.772315   72782 command_runner.go:130] > #   The currently recognized values are:
	I1205 20:03:40.772337   72782 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1205 20:03:40.772368   72782 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1205 20:03:40.772396   72782 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1205 20:03:40.772416   72782 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1205 20:03:40.772439   72782 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1205 20:03:40.772468   72782 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1205 20:03:40.772496   72782 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1205 20:03:40.772518   72782 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1205 20:03:40.772536   72782 command_runner.go:130] > #   should be moved to the container's cgroup
	I1205 20:03:40.772652   72782 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1205 20:03:40.772687   72782 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1205 20:03:40.772704   72782 command_runner.go:130] > runtime_type = "oci"
	I1205 20:03:40.772745   72782 command_runner.go:130] > runtime_root = "/run/runc"
	I1205 20:03:40.772765   72782 command_runner.go:130] > runtime_config_path = ""
	I1205 20:03:40.772784   72782 command_runner.go:130] > monitor_path = ""
	I1205 20:03:40.772812   72782 command_runner.go:130] > monitor_cgroup = ""
	I1205 20:03:40.772837   72782 command_runner.go:130] > monitor_exec_cgroup = ""
	I1205 20:03:40.772868   72782 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1205 20:03:40.772885   72782 command_runner.go:130] > # running containers
	I1205 20:03:40.772911   72782 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1205 20:03:40.772941   72782 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1205 20:03:40.772967   72782 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1205 20:03:40.772987   72782 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1205 20:03:40.773016   72782 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1205 20:03:40.773035   72782 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1205 20:03:40.773053   72782 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1205 20:03:40.773070   72782 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1205 20:03:40.773123   72782 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1205 20:03:40.773142   72782 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
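For illustration, an uncommented VM-type handler matching the table format documented above could look like the following; the kata shim path and config path are assumptions, not values from this host:

	[crio.runtime.runtimes.kata-qemu]
	runtime_path = "/usr/bin/containerd-shim-kata-v2"
	runtime_type = "vm"
	runtime_config_path = "/etc/kata-containers/configuration.toml"
	privileged_without_host_devices = true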
	I1205 20:03:40.773162   72782 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1205 20:03:40.773198   72782 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1205 20:03:40.773221   72782 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1205 20:03:40.773243   72782 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1205 20:03:40.773274   72782 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1205 20:03:40.773305   72782 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1205 20:03:40.773329   72782 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1205 20:03:40.773350   72782 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1205 20:03:40.773377   72782 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1205 20:03:40.773407   72782 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1205 20:03:40.773424   72782 command_runner.go:130] > # Example:
	I1205 20:03:40.773441   72782 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1205 20:03:40.773460   72782 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1205 20:03:40.773495   72782 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1205 20:03:40.773514   72782 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1205 20:03:40.773533   72782 command_runner.go:130] > # cpuset = "0-1"
	I1205 20:03:40.773551   72782 command_runner.go:130] > # cpushares = 0
	I1205 20:03:40.773586   72782 command_runner.go:130] > # Where:
	I1205 20:03:40.773604   72782 command_runner.go:130] > # The workload name is workload-type.
	I1205 20:03:40.773625   72782 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1205 20:03:40.773644   72782 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1205 20:03:40.773679   72782 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1205 20:03:40.773702   72782 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1205 20:03:40.773722   72782 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1205 20:03:40.773747   72782 command_runner.go:130] > # 
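Tying the workload example together, a pod opting into workload-type and overriding cpu shares for one container might be annotated as below (a hypothetical pod using the example annotation names from the comments):

	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo
	  annotations:
	    io.crio/workload: ""
	    io.crio.workload-type/app: '{"cpushares": "512"}'
	spec:
	  containers:
	  - name: app
	    image: registry.k8s.io/pause:3.9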
	I1205 20:03:40.773774   72782 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1205 20:03:40.773792   72782 command_runner.go:130] > #
	I1205 20:03:40.773816   72782 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1205 20:03:40.774009   72782 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1205 20:03:40.774062   72782 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1205 20:03:40.774084   72782 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1205 20:03:40.774113   72782 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1205 20:03:40.774138   72782 command_runner.go:130] > [crio.image]
	I1205 20:03:40.774158   72782 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1205 20:03:40.774176   72782 command_runner.go:130] > # default_transport = "docker://"
	I1205 20:03:40.774195   72782 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1205 20:03:40.774233   72782 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1205 20:03:40.774252   72782 command_runner.go:130] > # global_auth_file = ""
	I1205 20:03:40.774273   72782 command_runner.go:130] > # The image used to instantiate infra containers.
	I1205 20:03:40.774301   72782 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:03:40.774327   72782 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1205 20:03:40.774348   72782 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1205 20:03:40.774369   72782 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1205 20:03:40.774396   72782 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:03:40.774421   72782 command_runner.go:130] > # pause_image_auth_file = ""
	I1205 20:03:40.774440   72782 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1205 20:03:40.774460   72782 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1205 20:03:40.774488   72782 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1205 20:03:40.774515   72782 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1205 20:03:40.774533   72782 command_runner.go:130] > # pause_command = "/pause"
	I1205 20:03:40.774552   72782 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1205 20:03:40.774570   72782 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1205 20:03:40.774612   72782 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1205 20:03:40.774631   72782 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1205 20:03:40.774648   72782 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1205 20:03:40.774677   72782 command_runner.go:130] > # signature_policy = ""
	I1205 20:03:40.774704   72782 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1205 20:03:40.774725   72782 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1205 20:03:40.774741   72782 command_runner.go:130] > # changing them here.
	I1205 20:03:40.774775   72782 command_runner.go:130] > # insecure_registries = [
	I1205 20:03:40.774801   72782 command_runner.go:130] > # ]
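If TLS verification did have to be skipped for a private registry, the entry would be uncommented like this (the registry address is a placeholder; per the comment above, registries.conf is the preferred place):

	insecure_registries = [
		"my-registry.local:5000",
	]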
	I1205 20:03:40.774822   72782 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1205 20:03:40.774842   72782 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1205 20:03:40.774874   72782 command_runner.go:130] > # image_volumes = "mkdir"
	I1205 20:03:40.774903   72782 command_runner.go:130] > # Temporary directory to use for storing big files
	I1205 20:03:40.774922   72782 command_runner.go:130] > # big_files_temporary_dir = ""
	I1205 20:03:40.774944   72782 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1205 20:03:40.774991   72782 command_runner.go:130] > # CNI plugins.
	I1205 20:03:40.775019   72782 command_runner.go:130] > [crio.network]
	I1205 20:03:40.775037   72782 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1205 20:03:40.775056   72782 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1205 20:03:40.775073   72782 command_runner.go:130] > # cni_default_network = ""
	I1205 20:03:40.775110   72782 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1205 20:03:40.775127   72782 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1205 20:03:40.775148   72782 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1205 20:03:40.775165   72782 command_runner.go:130] > # plugin_dirs = [
	I1205 20:03:40.775199   72782 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1205 20:03:40.775214   72782 command_runner.go:130] > # ]
	I1205 20:03:40.775234   72782 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1205 20:03:40.775250   72782 command_runner.go:130] > [crio.metrics]
	I1205 20:03:40.775276   72782 command_runner.go:130] > # Globally enable or disable metrics support.
	I1205 20:03:40.775295   72782 command_runner.go:130] > # enable_metrics = false
	I1205 20:03:40.775314   72782 command_runner.go:130] > # Specify enabled metrics collectors.
	I1205 20:03:40.775330   72782 command_runner.go:130] > # By default, all metrics are enabled.
	I1205 20:03:40.775367   72782 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1205 20:03:40.775388   72782 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1205 20:03:40.775408   72782 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1205 20:03:40.775433   72782 command_runner.go:130] > # metrics_collectors = [
	I1205 20:03:40.775458   72782 command_runner.go:130] > # 	"operations",
	I1205 20:03:40.775476   72782 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1205 20:03:40.775494   72782 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1205 20:03:40.775512   72782 command_runner.go:130] > # 	"operations_errors",
	I1205 20:03:40.775543   72782 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1205 20:03:40.775563   72782 command_runner.go:130] > # 	"image_pulls_by_name",
	I1205 20:03:40.775581   72782 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1205 20:03:40.775599   72782 command_runner.go:130] > # 	"image_pulls_failures",
	I1205 20:03:40.775633   72782 command_runner.go:130] > # 	"image_pulls_successes",
	I1205 20:03:40.775651   72782 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1205 20:03:40.775668   72782 command_runner.go:130] > # 	"image_layer_reuse",
	I1205 20:03:40.775686   72782 command_runner.go:130] > # 	"containers_oom_total",
	I1205 20:03:40.775717   72782 command_runner.go:130] > # 	"containers_oom",
	I1205 20:03:40.775737   72782 command_runner.go:130] > # 	"processes_defunct",
	I1205 20:03:40.775770   72782 command_runner.go:130] > # 	"operations_total",
	I1205 20:03:40.775793   72782 command_runner.go:130] > # 	"operations_latency_seconds",
	I1205 20:03:40.775800   72782 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1205 20:03:40.775805   72782 command_runner.go:130] > # 	"operations_errors_total",
	I1205 20:03:40.775812   72782 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1205 20:03:40.775818   72782 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1205 20:03:40.775830   72782 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1205 20:03:40.775839   72782 command_runner.go:130] > # 	"image_pulls_success_total",
	I1205 20:03:40.775847   72782 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1205 20:03:40.775855   72782 command_runner.go:130] > # 	"containers_oom_count_total",
	I1205 20:03:40.775859   72782 command_runner.go:130] > # ]
	I1205 20:03:40.775868   72782 command_runner.go:130] > # The port on which the metrics server will listen.
	I1205 20:03:40.775873   72782 command_runner.go:130] > # metrics_port = 9090
	I1205 20:03:40.775880   72782 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1205 20:03:40.775885   72782 command_runner.go:130] > # metrics_socket = ""
	I1205 20:03:40.775895   72782 command_runner.go:130] > # The certificate for the secure metrics server.
	I1205 20:03:40.775902   72782 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1205 20:03:40.775913   72782 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1205 20:03:40.775925   72782 command_runner.go:130] > # certificate on any modification event.
	I1205 20:03:40.775935   72782 command_runner.go:130] > # metrics_cert = ""
	I1205 20:03:40.775942   72782 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1205 20:03:40.775951   72782 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1205 20:03:40.775956   72782 command_runner.go:130] > # metrics_key = ""
	I1205 20:03:40.775963   72782 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1205 20:03:40.775968   72782 command_runner.go:130] > [crio.tracing]
	I1205 20:03:40.775977   72782 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1205 20:03:40.775985   72782 command_runner.go:130] > # enable_tracing = false
	I1205 20:03:40.775993   72782 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1205 20:03:40.776002   72782 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1205 20:03:40.776009   72782 command_runner.go:130] > # Number of samples to collect per million spans.
	I1205 20:03:40.776017   72782 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1205 20:03:40.776024   72782 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1205 20:03:40.776031   72782 command_runner.go:130] > [crio.stats]
	I1205 20:03:40.776038   72782 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1205 20:03:40.776046   72782 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1205 20:03:40.776052   72782 command_runner.go:130] > # stats_collection_period = 0
	I1205 20:03:40.776087   72782 command_runner.go:130] ! time="2023-12-05 20:03:40.736617040Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1205 20:03:40.776104   72782 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1205 20:03:40.776178   72782 cni.go:84] Creating CNI manager for ""
	I1205 20:03:40.776197   72782 cni.go:136] 1 nodes found, recommending kindnet
	I1205 20:03:40.776227   72782 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 20:03:40.776249   72782 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-930892 NodeName:multinode-930892 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:03:40.776391   72782 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-930892"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:03:40.776451   72782 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-930892 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-930892 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1205 20:03:40.776517   72782 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1205 20:03:40.785743   72782 command_runner.go:130] > kubeadm
	I1205 20:03:40.785765   72782 command_runner.go:130] > kubectl
	I1205 20:03:40.785771   72782 command_runner.go:130] > kubelet
	I1205 20:03:40.786795   72782 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:03:40.786876   72782 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:03:40.796662   72782 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I1205 20:03:40.817020   72782 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:03:40.836895   72782 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I1205 20:03:40.856781   72782 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1205 20:03:40.860985   72782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:03:40.873544   72782 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892 for IP: 192.168.58.2
	I1205 20:03:40.873575   72782 certs.go:190] acquiring lock for shared ca certs: {Name:mk8ef93a51958e82275f202c3866b092b6aa4ced Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:03:40.873700   72782 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17731-2478/.minikube/ca.key
	I1205 20:03:40.873745   72782 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17731-2478/.minikube/proxy-client-ca.key
	I1205 20:03:40.873793   72782 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/client.key
	I1205 20:03:40.873810   72782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/client.crt with IP's: []
	I1205 20:03:41.290669   72782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/client.crt ...
	I1205 20:03:41.290703   72782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/client.crt: {Name:mkaba892ba7b661fee3ebed1fee6cacaaa923f64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:03:41.290898   72782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/client.key ...
	I1205 20:03:41.290910   72782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/client.key: {Name:mk122b5c4659dd3f3e571e54f47cc7b0c8748fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:03:41.290999   72782 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/apiserver.key.cee25041
	I1205 20:03:41.291014   72782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1205 20:03:41.850241   72782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/apiserver.crt.cee25041 ...
	I1205 20:03:41.850271   72782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/apiserver.crt.cee25041: {Name:mk54d74dac6810dbc9d50a194c2ba9c96af15500 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:03:41.850444   72782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/apiserver.key.cee25041 ...
	I1205 20:03:41.850457   72782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/apiserver.key.cee25041: {Name:mk3f12667652f29069d382fc67b8dfa885098ba3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:03:41.850534   72782 certs.go:337] copying /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/apiserver.crt
	I1205 20:03:41.850612   72782 certs.go:341] copying /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/apiserver.key
	I1205 20:03:41.850676   72782 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/proxy-client.key
	I1205 20:03:41.850693   72782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/proxy-client.crt with IP's: []
	I1205 20:03:42.596806   72782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/proxy-client.crt ...
	I1205 20:03:42.596837   72782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/proxy-client.crt: {Name:mk210a05f816e6890aef0f518f4238e3f1e263aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:03:42.597072   72782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/proxy-client.key ...
	I1205 20:03:42.597087   72782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/proxy-client.key: {Name:mkec43560f7ad0bf12a2e03dd239088a2f209b44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:03:42.597173   72782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 20:03:42.597194   72782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 20:03:42.597207   72782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 20:03:42.597221   72782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 20:03:42.597233   72782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 20:03:42.597250   72782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 20:03:42.597267   72782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 20:03:42.597282   72782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 20:03:42.597331   72782 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/home/jenkins/minikube-integration/17731-2478/.minikube/certs/7773.pem (1338 bytes)
	W1205 20:03:42.597378   72782 certs.go:433] ignoring /home/jenkins/minikube-integration/17731-2478/.minikube/certs/home/jenkins/minikube-integration/17731-2478/.minikube/certs/7773_empty.pem, impossibly tiny 0 bytes
	I1205 20:03:42.597395   72782 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 20:03:42.597424   72782 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:03:42.597455   72782 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/home/jenkins/minikube-integration/17731-2478/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:03:42.597486   72782 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/home/jenkins/minikube-integration/17731-2478/.minikube/certs/key.pem (1679 bytes)
	I1205 20:03:42.597535   72782 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-2478/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17731-2478/.minikube/files/etc/ssl/certs/77732.pem (1708 bytes)
	I1205 20:03:42.597568   72782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/files/etc/ssl/certs/77732.pem -> /usr/share/ca-certificates/77732.pem
	I1205 20:03:42.597586   72782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:03:42.597598   72782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/7773.pem -> /usr/share/ca-certificates/7773.pem
	I1205 20:03:42.598178   72782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1205 20:03:42.624770   72782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 20:03:42.652316   72782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:03:42.679142   72782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 20:03:42.705297   72782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:03:42.732489   72782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 20:03:42.759721   72782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:03:42.786318   72782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:03:42.812929   72782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/files/etc/ssl/certs/77732.pem --> /usr/share/ca-certificates/77732.pem (1708 bytes)
	I1205 20:03:42.840377   72782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:03:42.866868   72782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/certs/7773.pem --> /usr/share/ca-certificates/7773.pem (1338 bytes)
	I1205 20:03:42.893211   72782 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:03:42.912912   72782 ssh_runner.go:195] Run: openssl version
	I1205 20:03:42.919296   72782 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1205 20:03:42.919694   72782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77732.pem && ln -fs /usr/share/ca-certificates/77732.pem /etc/ssl/certs/77732.pem"
	I1205 20:03:42.930993   72782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77732.pem
	I1205 20:03:42.935310   72782 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/77732.pem
	I1205 20:03:42.935564   72782 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/77732.pem
	I1205 20:03:42.935630   72782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77732.pem
	I1205 20:03:42.943571   72782 command_runner.go:130] > 3ec20f2e
	I1205 20:03:42.943995   72782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/77732.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:03:42.955191   72782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:03:42.966406   72782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:03:42.970911   72782 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  5 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:03:42.971031   72782 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:03:42.971103   72782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:03:42.978830   72782 command_runner.go:130] > b5213941
	I1205 20:03:42.979285   72782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:03:42.990608   72782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7773.pem && ln -fs /usr/share/ca-certificates/7773.pem /etc/ssl/certs/7773.pem"
	I1205 20:03:43.001726   72782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7773.pem
	I1205 20:03:43.006511   72782 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/7773.pem
	I1205 20:03:43.006604   72782 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/7773.pem
	I1205 20:03:43.006676   72782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7773.pem
	I1205 20:03:43.014371   72782 command_runner.go:130] > 51391683
	I1205 20:03:43.014792   72782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7773.pem /etc/ssl/certs/51391683.0"
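All three link steps above implement the same OpenSSL lookup convention: a CA certificate is found by a symlink named <subject-hash>.0 in /etc/ssl/certs. A minimal sketch of that convention for any PEM certificate:

	# Compute the subject hash OpenSSL uses to locate trusted CAs...
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# ...and point the <hash>.0 symlink at the certificate so verification finds it.
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"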
	I1205 20:03:43.025736   72782 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 20:03:43.030085   72782 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1205 20:03:43.030161   72782 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1205 20:03:43.030219   72782 kubeadm.go:404] StartCluster: {Name:multinode-930892 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-930892 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:03:43.030290   72782 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:03:43.030355   72782 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:03:43.070707   72782 cri.go:89] found id: ""
	I1205 20:03:43.070817   72782 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:03:43.081423   72782 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1205 20:03:43.081488   72782 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1205 20:03:43.081503   72782 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1205 20:03:43.081573   72782 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:03:43.092081   72782 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1205 20:03:43.092155   72782 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:03:43.102399   72782 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1205 20:03:43.102426   72782 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1205 20:03:43.102435   72782 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1205 20:03:43.102446   72782 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:03:43.102474   72782 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:03:43.102506   72782 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1205 20:03:43.153737   72782 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1205 20:03:43.153820   72782 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I1205 20:03:43.154117   72782 kubeadm.go:322] [preflight] Running pre-flight checks
	I1205 20:03:43.154135   72782 command_runner.go:130] > [preflight] Running pre-flight checks
	I1205 20:03:43.201076   72782 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1205 20:03:43.201109   72782 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1205 20:03:43.201161   72782 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1050-aws
	I1205 20:03:43.201172   72782 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1050-aws
	I1205 20:03:43.201205   72782 kubeadm.go:322] OS: Linux
	I1205 20:03:43.201213   72782 command_runner.go:130] > OS: Linux
	I1205 20:03:43.201255   72782 kubeadm.go:322] CGROUPS_CPU: enabled
	I1205 20:03:43.201264   72782 command_runner.go:130] > CGROUPS_CPU: enabled
	I1205 20:03:43.201308   72782 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1205 20:03:43.201316   72782 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1205 20:03:43.201359   72782 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1205 20:03:43.201368   72782 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1205 20:03:43.201413   72782 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1205 20:03:43.201421   72782 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1205 20:03:43.201468   72782 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1205 20:03:43.201477   72782 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1205 20:03:43.201521   72782 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1205 20:03:43.201530   72782 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1205 20:03:43.201572   72782 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1205 20:03:43.201580   72782 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1205 20:03:43.201625   72782 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1205 20:03:43.201634   72782 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1205 20:03:43.201677   72782 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1205 20:03:43.201683   72782 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1205 20:03:43.279177   72782 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:03:43.279204   72782 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:03:43.279293   72782 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:03:43.279303   72782 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:03:43.279388   72782 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:03:43.279397   72782 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:03:43.512151   72782 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:03:43.516499   72782 out.go:204]   - Generating certificates and keys ...
	I1205 20:03:43.512214   72782 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:03:43.516632   72782 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1205 20:03:43.516670   72782 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1205 20:03:43.516744   72782 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1205 20:03:43.516774   72782 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1205 20:03:43.880536   72782 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 20:03:43.880628   72782 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 20:03:44.213960   72782 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1205 20:03:44.214046   72782 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1205 20:03:44.504653   72782 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1205 20:03:44.504725   72782 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1205 20:03:44.827957   72782 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1205 20:03:44.827994   72782 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1205 20:03:46.140193   72782 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1205 20:03:46.140223   72782 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1205 20:03:46.140521   72782 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-930892] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1205 20:03:46.140537   72782 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-930892] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1205 20:03:47.298644   72782 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1205 20:03:47.298685   72782 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1205 20:03:47.299031   72782 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-930892] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1205 20:03:47.299047   72782 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-930892] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1205 20:03:47.745211   72782 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 20:03:47.745241   72782 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 20:03:48.089965   72782 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 20:03:48.089994   72782 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 20:03:48.460137   72782 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1205 20:03:48.460161   72782 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1205 20:03:48.460395   72782 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:03:48.460406   72782 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:03:49.040097   72782 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:03:49.040121   72782 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:03:49.200984   72782 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:03:49.201008   72782 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:03:49.405309   72782 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:03:49.405332   72782 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:03:50.552913   72782 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:03:50.552938   72782 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:03:50.553870   72782 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:03:50.553884   72782 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:03:50.556972   72782 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:03:50.559667   72782 out.go:204]   - Booting up control plane ...
	I1205 20:03:50.557058   72782 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:03:50.559774   72782 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:03:50.559786   72782 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:03:50.559906   72782 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:03:50.559913   72782 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:03:50.560385   72782 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:03:50.560399   72782 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:03:50.573027   72782 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:03:50.573052   72782 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:03:50.573159   72782 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:03:50.573179   72782 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:03:50.573225   72782 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1205 20:03:50.573238   72782 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1205 20:03:50.668130   72782 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:03:50.668155   72782 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:03:58.670236   72782 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002687 seconds
	I1205 20:03:58.670265   72782 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.002687 seconds
	I1205 20:03:58.670365   72782 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:03:58.670375   72782 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:03:58.684493   72782 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:03:58.684522   72782 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:03:59.208453   72782 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:03:59.208482   72782 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:03:59.208658   72782 kubeadm.go:322] [mark-control-plane] Marking the node multinode-930892 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 20:03:59.208669   72782 command_runner.go:130] > [mark-control-plane] Marking the node multinode-930892 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 20:03:59.720294   72782 kubeadm.go:322] [bootstrap-token] Using token: cb1epv.pxz63p779z30zkxz
	I1205 20:03:59.722056   72782 out.go:204]   - Configuring RBAC rules ...
	I1205 20:03:59.720418   72782 command_runner.go:130] > [bootstrap-token] Using token: cb1epv.pxz63p779z30zkxz
	I1205 20:03:59.722170   72782 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:03:59.722186   72782 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:03:59.727662   72782 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 20:03:59.727681   72782 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 20:03:59.734859   72782 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:03:59.734884   72782 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:03:59.740294   72782 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:03:59.740320   72782 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:03:59.745331   72782 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:03:59.745353   72782 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:03:59.748692   72782 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:03:59.748710   72782 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:03:59.762872   72782 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 20:03:59.762896   72782 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 20:04:00.009997   72782 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1205 20:04:00.010024   72782 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1205 20:04:00.136192   72782 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1205 20:04:00.136216   72782 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1205 20:04:00.136223   72782 kubeadm.go:322] 
	I1205 20:04:00.136280   72782 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1205 20:04:00.136285   72782 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1205 20:04:00.136289   72782 kubeadm.go:322] 
	I1205 20:04:00.136361   72782 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1205 20:04:00.136374   72782 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1205 20:04:00.136379   72782 kubeadm.go:322] 
	I1205 20:04:00.136403   72782 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1205 20:04:00.136407   72782 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1205 20:04:00.136462   72782 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:04:00.136467   72782 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:04:00.136514   72782 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:04:00.136519   72782 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:04:00.136523   72782 kubeadm.go:322] 
	I1205 20:04:00.136574   72782 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1205 20:04:00.136578   72782 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1205 20:04:00.136582   72782 kubeadm.go:322] 
	I1205 20:04:00.136627   72782 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 20:04:00.136632   72782 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 20:04:00.136636   72782 kubeadm.go:322] 
	I1205 20:04:00.136684   72782 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1205 20:04:00.136689   72782 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1205 20:04:00.136759   72782 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:04:00.136771   72782 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:04:00.136835   72782 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:04:00.136839   72782 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:04:00.136843   72782 kubeadm.go:322] 
	I1205 20:04:00.136923   72782 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 20:04:00.136927   72782 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1205 20:04:00.136999   72782 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1205 20:04:00.137003   72782 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1205 20:04:00.137008   72782 kubeadm.go:322] 
	I1205 20:04:00.137087   72782 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token cb1epv.pxz63p779z30zkxz \
	I1205 20:04:00.137091   72782 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token cb1epv.pxz63p779z30zkxz \
	I1205 20:04:00.137189   72782 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6da2d77b39f3e1ef9cef384839cc68d840e02bf2206be4d2a37e26b3d0a71759 \
	I1205 20:04:00.137194   72782 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:6da2d77b39f3e1ef9cef384839cc68d840e02bf2206be4d2a37e26b3d0a71759 \
	I1205 20:04:00.137214   72782 kubeadm.go:322] 	--control-plane 
	I1205 20:04:00.137219   72782 command_runner.go:130] > 	--control-plane 
	I1205 20:04:00.137223   72782 kubeadm.go:322] 
	I1205 20:04:00.137303   72782 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:04:00.137309   72782 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:04:00.137315   72782 kubeadm.go:322] 
	I1205 20:04:00.137392   72782 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token cb1epv.pxz63p779z30zkxz \
	I1205 20:04:00.137397   72782 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token cb1epv.pxz63p779z30zkxz \
	I1205 20:04:00.137491   72782 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6da2d77b39f3e1ef9cef384839cc68d840e02bf2206be4d2a37e26b3d0a71759 
	I1205 20:04:00.137496   72782 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:6da2d77b39f3e1ef9cef384839cc68d840e02bf2206be4d2a37e26b3d0a71759 
	I1205 20:04:00.139591   72782 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1050-aws\n", err: exit status 1
	I1205 20:04:00.139676   72782 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1050-aws\n", err: exit status 1
	I1205 20:04:00.139903   72782 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:04:00.139917   72782 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:04:00.139937   72782 cni.go:84] Creating CNI manager for ""
	I1205 20:04:00.139944   72782 cni.go:136] 1 nodes found, recommending kindnet
	I1205 20:04:00.142277   72782 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1205 20:04:00.144333   72782 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1205 20:04:00.172087   72782 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1205 20:04:00.172115   72782 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I1205 20:04:00.172125   72782 command_runner.go:130] > Device: 3ah/58d	Inode: 1092525     Links: 1
	I1205 20:04:00.172134   72782 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 20:04:00.172141   72782 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I1205 20:04:00.172155   72782 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I1205 20:04:00.172162   72782 command_runner.go:130] > Change: 2023-12-05 19:35:53.617733280 +0000
	I1205 20:04:00.172168   72782 command_runner.go:130] >  Birth: 2023-12-05 19:35:53.577733006 +0000
	I1205 20:04:00.172806   72782 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1205 20:04:00.172822   72782 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1205 20:04:00.204052   72782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1205 20:04:01.002934   72782 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1205 20:04:01.011797   72782 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1205 20:04:01.020284   72782 command_runner.go:130] > serviceaccount/kindnet created
	I1205 20:04:01.031513   72782 command_runner.go:130] > daemonset.apps/kindnet created
	I1205 20:04:01.036737   72782 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:04:01.036833   72782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:01.036920   72782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728 minikube.k8s.io/name=multinode-930892 minikube.k8s.io/updated_at=2023_12_05T20_04_01_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:01.163531   72782 command_runner.go:130] > node/multinode-930892 labeled
	I1205 20:04:01.185412   72782 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1205 20:04:01.188952   72782 command_runner.go:130] > -16
	I1205 20:04:01.188981   72782 ops.go:34] apiserver oom_adj: -16
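	(Aside: the oom_adj probe above is a one-line read of /proc/<pid>/oom_adj for the kube-apiserver process; -16 means the kernel OOM killer strongly avoids it. A sketch of the same check, using pgrep for brevity:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// pgrep exits non-zero when no process matches, so err covers that case.
		out, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
			return
		}
		pid := strings.Fields(string(out))[0]
		adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Printf("apiserver oom_adj: %s", adj)
	}
	)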
	I1205 20:04:01.189060   72782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:01.304766   72782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:01.304855   72782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:01.393955   72782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:01.894310   72782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:01.982004   72782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:02.394499   72782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:02.478249   72782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:02.894263   72782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:02.981262   72782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:03.394274   72782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:03.477934   72782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:03.895202   72782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:03.984204   72782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:04.394700   72782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:04.483503   72782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:04.895101   72782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:04.984720   72782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:05.394255   72782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:05.478399   72782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:05.894550   72782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:05.982413   72782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:06.395023   72782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:06.485189   72782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:06.894480   72782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:06.980322   72782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:07.394478   72782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:07.487913   72782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:07.894480   72782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:07.983900   72782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:08.394474   72782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:08.481159   72782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:08.894763   72782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:08.989264   72782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:09.394893   72782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:09.489946   72782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:09.894234   72782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:09.987474   72782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:10.394259   72782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:10.494418   72782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:10.894202   72782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:10.988690   72782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:11.394326   72782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:11.484601   72782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:11.894229   72782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:11.986472   72782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:12.395080   72782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:12.486308   72782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:12.894969   72782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:13.044791   72782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:13.394349   72782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:13.496342   72782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:13.895050   72782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:13.994914   72782 command_runner.go:130] > NAME      SECRETS   AGE
	I1205 20:04:13.994938   72782 command_runner.go:130] > default   0         0s
	I1205 20:04:13.998098   72782 kubeadm.go:1088] duration metric: took 12.961309016s to wait for elevateKubeSystemPrivileges.
	I1205 20:04:13.998130   72782 kubeadm.go:406] StartCluster complete in 30.96791342s
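	(Aside: the ~500 ms retry loop above, repeating `kubectl get sa default` until the token controller creates the account, is a standard poll-until-found pattern. With recent client-go/apimachinery it can be written with the wait helpers; a sketch, with the kubeconfig path and timeout illustrative:

	package main

	import (
		"context"
		"fmt"
		"time"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 500 ms (the cadence visible in the log) until the
		// "default" ServiceAccount exists in the default namespace.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 2*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
				if apierrors.IsNotFound(err) {
					return false, nil // not created yet; keep waiting
				}
				return err == nil, err
			})
		fmt.Println("default ServiceAccount ready:", err == nil)
	}
	)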
	I1205 20:04:13.998147   72782 settings.go:142] acquiring lock: {Name:mk9158e056caaf62837361622cedbf37e18c3f8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:04:13.998210   72782 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17731-2478/kubeconfig
	I1205 20:04:13.998875   72782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/kubeconfig: {Name:mka2e3e3347ae085678ba2bb20225628c9c86ab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:04:13.999363   72782 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17731-2478/kubeconfig
	I1205 20:04:13.999630   72782 kapi.go:59] client config for multinode-930892: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/client.crt", KeyFile:"/home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/client.key", CAFile:"/home/jenkins/minikube-integration/17731-2478/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6350), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 20:04:13.999906   72782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 20:04:14.000164   72782 config.go:182] Loaded profile config "multinode-930892": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:04:14.000335   72782 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1205 20:04:14.000395   72782 addons.go:69] Setting storage-provisioner=true in profile "multinode-930892"
	I1205 20:04:14.000405   72782 addons.go:231] Setting addon storage-provisioner=true in "multinode-930892"
	I1205 20:04:14.000466   72782 host.go:66] Checking if "multinode-930892" exists ...
	I1205 20:04:14.000541   72782 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1205 20:04:14.000555   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:14.000567   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:14.000574   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:14.000741   72782 cert_rotation.go:137] Starting client certificate rotation controller
	I1205 20:04:14.000960   72782 cli_runner.go:164] Run: docker container inspect multinode-930892 --format={{.State.Status}}
	I1205 20:04:14.001081   72782 addons.go:69] Setting default-storageclass=true in profile "multinode-930892"
	I1205 20:04:14.001091   72782 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-930892"
	I1205 20:04:14.001391   72782 cli_runner.go:164] Run: docker container inspect multinode-930892 --format={{.State.Status}}
	I1205 20:04:14.032721   72782 round_trippers.go:574] Response Status: 200 OK in 32 milliseconds
	I1205 20:04:14.032745   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:14.032754   72782 round_trippers.go:580]     Audit-Id: 02c356cf-53b1-4605-bb6f-4182a7d89799
	I1205 20:04:14.032761   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:14.032767   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:14.032773   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:14.032780   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:14.032789   72782 round_trippers.go:580]     Content-Length: 291
	I1205 20:04:14.032799   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:14 GMT
	I1205 20:04:14.043918   72782 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"29176da8-1129-498f-981f-e9a68ede7ad4","resourceVersion":"379","creationTimestamp":"2023-12-05T20:03:59Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1205 20:04:14.044353   72782 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"29176da8-1129-498f-981f-e9a68ede7ad4","resourceVersion":"379","creationTimestamp":"2023-12-05T20:03:59Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1205 20:04:14.044402   72782 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1205 20:04:14.044409   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:14.044417   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:14.044424   72782 round_trippers.go:473]     Content-Type: application/json
	I1205 20:04:14.044431   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:14.066381   72782 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:04:14.064971   72782 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17731-2478/kubeconfig
	I1205 20:04:14.068690   72782 kapi.go:59] client config for multinode-930892: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/client.crt", KeyFile:"/home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/client.key", CAFile:"/home/jenkins/minikube-integration/17731-2478/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6350), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 20:04:14.068950   72782 addons.go:231] Setting addon default-storageclass=true in "multinode-930892"
	I1205 20:04:14.068980   72782 host.go:66] Checking if "multinode-930892" exists ...
	I1205 20:04:14.069452   72782 cli_runner.go:164] Run: docker container inspect multinode-930892 --format={{.State.Status}}
	I1205 20:04:14.069666   72782 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:04:14.069687   72782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:04:14.069722   72782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-930892
	I1205 20:04:14.108872   72782 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:04:14.108892   72782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:04:14.108952   72782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-930892
	I1205 20:04:14.109097   72782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/multinode-930892/id_rsa Username:docker}
	I1205 20:04:14.119570   72782 round_trippers.go:574] Response Status: 200 OK in 75 milliseconds
	I1205 20:04:14.119599   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:14.119608   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:14 GMT
	I1205 20:04:14.119615   72782 round_trippers.go:580]     Audit-Id: 3b6e2d14-301a-43fc-be4c-89e643036e06
	I1205 20:04:14.119631   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:14.119638   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:14.119650   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:14.119656   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:14.119663   72782 round_trippers.go:580]     Content-Length: 291
	I1205 20:04:14.119688   72782 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"29176da8-1129-498f-981f-e9a68ede7ad4","resourceVersion":"387","creationTimestamp":"2023-12-05T20:03:59Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1205 20:04:14.119911   72782 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1205 20:04:14.119928   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:14.119937   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:14.119944   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:14.131310   72782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/multinode-930892/id_rsa Username:docker}
	I1205 20:04:14.190640   72782 round_trippers.go:574] Response Status: 200 OK in 70 milliseconds
	I1205 20:04:14.190665   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:14.190674   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:14 GMT
	I1205 20:04:14.190681   72782 round_trippers.go:580]     Audit-Id: 79ff5bc0-0ee6-4125-994a-c1df6163d58f
	I1205 20:04:14.190687   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:14.190693   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:14.190699   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:14.190711   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:14.190717   72782 round_trippers.go:580]     Content-Length: 291
	I1205 20:04:14.192292   72782 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"29176da8-1129-498f-981f-e9a68ede7ad4","resourceVersion":"387","creationTimestamp":"2023-12-05T20:03:59Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1205 20:04:14.192390   72782 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-930892" context rescaled to 1 replicas
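	(Aside: the GET/PUT pair against .../deployments/coredns/scale above is the Scale subresource; the equivalent client-go calls are GetScale/UpdateScale. A sketch, assuming a configured clientset; the kubeconfig path is illustrative:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()
		// Fetch the current Scale, drop the replica count to 1, write it back.
		scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		scale.Spec.Replicas = 1 // single-node cluster: one CoreDNS replica suffices
		if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("coredns rescaled to 1 replica")
	}
	)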
	I1205 20:04:14.192420   72782 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:04:14.195237   72782 out.go:177] * Verifying Kubernetes components...
	I1205 20:04:14.197169   72782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:04:14.242298   72782 command_runner.go:130] > apiVersion: v1
	I1205 20:04:14.242316   72782 command_runner.go:130] > data:
	I1205 20:04:14.242322   72782 command_runner.go:130] >   Corefile: |
	I1205 20:04:14.242329   72782 command_runner.go:130] >     .:53 {
	I1205 20:04:14.242335   72782 command_runner.go:130] >         errors
	I1205 20:04:14.242340   72782 command_runner.go:130] >         health {
	I1205 20:04:14.242346   72782 command_runner.go:130] >            lameduck 5s
	I1205 20:04:14.242350   72782 command_runner.go:130] >         }
	I1205 20:04:14.242355   72782 command_runner.go:130] >         ready
	I1205 20:04:14.242363   72782 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1205 20:04:14.242371   72782 command_runner.go:130] >            pods insecure
	I1205 20:04:14.242378   72782 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1205 20:04:14.242391   72782 command_runner.go:130] >            ttl 30
	I1205 20:04:14.242396   72782 command_runner.go:130] >         }
	I1205 20:04:14.242401   72782 command_runner.go:130] >         prometheus :9153
	I1205 20:04:14.242411   72782 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1205 20:04:14.242417   72782 command_runner.go:130] >            max_concurrent 1000
	I1205 20:04:14.242458   72782 command_runner.go:130] >         }
	I1205 20:04:14.242470   72782 command_runner.go:130] >         cache 30
	I1205 20:04:14.242475   72782 command_runner.go:130] >         loop
	I1205 20:04:14.242486   72782 command_runner.go:130] >         reload
	I1205 20:04:14.242498   72782 command_runner.go:130] >         loadbalance
	I1205 20:04:14.242506   72782 command_runner.go:130] >     }
	I1205 20:04:14.242511   72782 command_runner.go:130] > kind: ConfigMap
	I1205 20:04:14.242518   72782 command_runner.go:130] > metadata:
	I1205 20:04:14.242529   72782 command_runner.go:130] >   creationTimestamp: "2023-12-05T20:03:59Z"
	I1205 20:04:14.242538   72782 command_runner.go:130] >   name: coredns
	I1205 20:04:14.242544   72782 command_runner.go:130] >   namespace: kube-system
	I1205 20:04:14.242549   72782 command_runner.go:130] >   resourceVersion: "254"
	I1205 20:04:14.242556   72782 command_runner.go:130] >   uid: 650f0fef-0719-44ea-8fa1-8bbb9f272ae4
	I1205 20:04:14.246266   72782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 20:04:14.246499   72782 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17731-2478/kubeconfig
	I1205 20:04:14.246833   72782 kapi.go:59] client config for multinode-930892: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/client.crt", KeyFile:"/home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/client.key", CAFile:"/home/jenkins/minikube-integration/17731-2478/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6350), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 20:04:14.247130   72782 node_ready.go:35] waiting up to 6m0s for node "multinode-930892" to be "Ready" ...
	I1205 20:04:14.247268   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:14.247293   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:14.247315   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:14.247338   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:14.261819   72782 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1205 20:04:14.261844   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:14.261852   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:14.261859   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:14.261873   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:14.261882   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:14 GMT
	I1205 20:04:14.261898   72782 round_trippers.go:580]     Audit-Id: f83f875c-0948-4b40-aa61-100ace489563
	I1205 20:04:14.261909   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:14.263657   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:14.264511   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:14.264537   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:14.264547   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:14.264554   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:14.275586   72782 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1205 20:04:14.275613   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:14.275622   72782 round_trippers.go:580]     Audit-Id: a358c5ee-ece9-4d5f-bcc6-6d174f08c166
	I1205 20:04:14.275629   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:14.275635   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:14.275641   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:14.275654   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:14.275668   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:14 GMT
	I1205 20:04:14.275823   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:14.279961   72782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:04:14.342461   72782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:04:14.777021   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:14.777053   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:14.777063   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:14.777070   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:14.779527   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:14.779603   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:14.779623   72782 round_trippers.go:580]     Audit-Id: ed1cf277-ca7d-4eaf-b268-5b843141e5e4
	I1205 20:04:14.779654   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:14.779675   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:14.779693   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:14.779712   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:14.779731   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:14 GMT
	I1205 20:04:14.779871   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:14.837981   72782 command_runner.go:130] > configmap/coredns replaced
	I1205 20:04:14.842946   72782 start.go:929] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
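	(Aside: the sed pipeline at 20:04:14.246266 splices a hosts block into the CoreDNS Corefile just ahead of the forward plugin, which is what produces the "host record injected" line above. A sketch of the same edit in Go; the gateway IP is taken from the log:

	package main

	import (
		"fmt"
		"strings"
	)

	// injectHostRecord inserts a CoreDNS hosts block immediately before the
	// forward plugin, mirroring the sed pipeline in the log.
	func injectHostRecord(corefile, gatewayIP string) string {
		hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", gatewayIP)
		var b strings.Builder
		for _, line := range strings.SplitAfter(corefile, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "forward .") {
				b.WriteString(hosts)
			}
			b.WriteString(line)
		}
		return b.String()
	}

	func main() {
		sample := ".:53 {\n    errors\n    forward . /etc/resolv.conf\n}\n"
		fmt.Print(injectHostRecord(sample, "192.168.58.1"))
	}
	)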
	I1205 20:04:14.900052   72782 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1205 20:04:14.904404   72782 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I1205 20:04:14.904457   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:14.904480   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:14.904501   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:14.906858   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:14.906917   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:14.906938   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:14.906958   72782 round_trippers.go:580]     Content-Length: 1273
	I1205 20:04:14.906989   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:14 GMT
	I1205 20:04:14.907015   72782 round_trippers.go:580]     Audit-Id: 36e28050-007e-4fc8-aa67-501d39686f89
	I1205 20:04:14.907035   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:14.907052   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:14.907071   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:14.913381   72782 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"407"},"items":[{"metadata":{"name":"standard","uid":"468e881b-14d1-446b-9036-debb2d2eb344","resourceVersion":"407","creationTimestamp":"2023-12-05T20:04:14Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-05T20:04:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1205 20:04:14.913861   72782 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"468e881b-14d1-446b-9036-debb2d2eb344","resourceVersion":"407","creationTimestamp":"2023-12-05T20:04:14Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-05T20:04:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1205 20:04:14.913940   72782 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1205 20:04:14.913960   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:14.914003   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:14.914022   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:14.914040   72782 round_trippers.go:473]     Content-Type: application/json
	I1205 20:04:14.919087   72782 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 20:04:14.919137   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:14.919158   72782 round_trippers.go:580]     Audit-Id: f55d72a3-14cc-41df-8edc-bb02e6235f53
	I1205 20:04:14.919179   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:14.919210   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:14.919231   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:14.919251   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:14.919270   72782 round_trippers.go:580]     Content-Length: 1220
	I1205 20:04:14.919308   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:14 GMT
	I1205 20:04:14.921570   72782 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"468e881b-14d1-446b-9036-debb2d2eb344","resourceVersion":"407","creationTimestamp":"2023-12-05T20:04:14Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-05T20:04:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1205 20:04:15.103451   72782 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1205 20:04:15.111927   72782 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1205 20:04:15.120656   72782 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1205 20:04:15.131017   72782 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1205 20:04:15.139668   72782 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1205 20:04:15.151601   72782 command_runner.go:130] > pod/storage-provisioner created
	I1205 20:04:15.160862   72782 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1205 20:04:15.163521   72782 addons.go:502] enable addons completed in 1.163197975s: enabled=[default-storageclass storage-provisioner]
	I1205 20:04:15.277117   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:15.277140   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:15.277150   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:15.277157   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:15.286096   72782 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1205 20:04:15.286123   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:15.286132   72782 round_trippers.go:580]     Audit-Id: ca8226f2-5942-4a3b-872c-6b588e43adaa
	I1205 20:04:15.286139   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:15.286145   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:15.286155   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:15.286162   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:15.286168   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:15 GMT
	I1205 20:04:15.286280   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:15.776412   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:15.776434   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:15.776443   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:15.776450   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:15.778917   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:15.778971   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:15.778992   72782 round_trippers.go:580]     Audit-Id: c7fe2707-b07c-450c-aee5-944908f30876
	I1205 20:04:15.779008   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:15.779015   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:15.779021   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:15.779028   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:15.779046   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:15 GMT
	I1205 20:04:15.779208   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:16.276493   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:16.276559   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:16.276598   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:16.276621   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:16.279003   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:16.279027   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:16.279036   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:16.279042   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:16 GMT
	I1205 20:04:16.279049   72782 round_trippers.go:580]     Audit-Id: 2c208ba8-fe78-4331-b747-a5088a2853bc
	I1205 20:04:16.279058   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:16.279065   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:16.279071   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:16.279204   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:16.279589   72782 node_ready.go:58] node "multinode-930892" has status "Ready":"False"
	I1205 20:04:16.777307   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:16.777349   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:16.777359   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:16.777366   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:16.779817   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:16.779840   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:16.779848   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:16.779855   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:16.779862   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:16 GMT
	I1205 20:04:16.779868   72782 round_trippers.go:580]     Audit-Id: fcd423c6-45d7-42be-8b67-6d155ccc2e31
	I1205 20:04:16.779874   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:16.779881   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:16.779993   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:17.277047   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:17.277071   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:17.277082   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:17.277089   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:17.279915   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:17.279937   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:17.279946   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:17.279952   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:17.279959   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:17.279965   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:17 GMT
	I1205 20:04:17.279971   72782 round_trippers.go:580]     Audit-Id: 5a93eb02-d95f-4a82-9c07-814809494a25
	I1205 20:04:17.279977   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:17.280139   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:17.777337   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:17.777363   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:17.777373   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:17.777380   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:17.779837   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:17.779859   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:17.779867   72782 round_trippers.go:580]     Audit-Id: 4d37a2ed-7f27-4b74-b865-3295ee5fcd76
	I1205 20:04:17.779874   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:17.779884   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:17.779890   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:17.779897   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:17.779904   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:17 GMT
	I1205 20:04:17.780069   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:18.277155   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:18.277178   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:18.277187   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:18.277195   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:18.286234   72782 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1205 20:04:18.286295   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:18.286316   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:18.286337   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:18.286365   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:18.286380   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:18.286388   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:18 GMT
	I1205 20:04:18.286394   72782 round_trippers.go:580]     Audit-Id: 87c252ff-0559-45f8-b023-7dd046ba746b
	I1205 20:04:18.286537   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:18.286955   72782 node_ready.go:58] node "multinode-930892" has status "Ready":"False"
	I1205 20:04:18.777019   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:18.777039   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:18.777048   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:18.777056   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:18.779477   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:18.779500   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:18.779507   72782 round_trippers.go:580]     Audit-Id: e50d6be8-2d02-4404-b54d-9aadf065cbaa
	I1205 20:04:18.779514   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:18.779520   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:18.779527   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:18.779533   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:18.779546   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:18 GMT
	I1205 20:04:18.779849   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:19.276461   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:19.276524   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:19.276549   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:19.276567   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:19.279407   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:19.279432   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:19.279442   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:19 GMT
	I1205 20:04:19.279449   72782 round_trippers.go:580]     Audit-Id: 36448882-1407-4075-978b-9d89834afc5f
	I1205 20:04:19.279455   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:19.279461   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:19.279467   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:19.279476   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:19.279624   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:19.776470   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:19.776502   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:19.776512   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:19.776519   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:19.778967   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:19.778987   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:19.778996   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:19.779002   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:19.779008   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:19 GMT
	I1205 20:04:19.779015   72782 round_trippers.go:580]     Audit-Id: e21353dc-fca9-4e88-922f-b676b5384e24
	I1205 20:04:19.779021   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:19.779045   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:19.779218   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:20.276483   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:20.276503   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:20.276512   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:20.276519   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:20.280341   72782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:04:20.280366   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:20.280375   72782 round_trippers.go:580]     Audit-Id: 2c4b83b7-cab0-4e41-a266-fcc46739aa6a
	I1205 20:04:20.280382   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:20.280388   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:20.280395   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:20.280401   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:20.280409   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:20 GMT
	I1205 20:04:20.280608   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:20.777168   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:20.777196   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:20.777206   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:20.777213   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:20.779814   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:20.779837   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:20.779846   72782 round_trippers.go:580]     Audit-Id: 1477d332-e9f8-4780-93f0-6a801406ab0b
	I1205 20:04:20.779853   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:20.779860   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:20.779866   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:20.779873   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:20.779885   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:20 GMT
	I1205 20:04:20.780015   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:20.780426   72782 node_ready.go:58] node "multinode-930892" has status "Ready":"False"
	I1205 20:04:21.276961   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:21.276985   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:21.276995   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:21.277002   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:21.281014   72782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:04:21.281039   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:21.281048   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:21 GMT
	I1205 20:04:21.281055   72782 round_trippers.go:580]     Audit-Id: 73375d6b-4cbd-4c3f-975a-470f76016be9
	I1205 20:04:21.281061   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:21.281068   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:21.281076   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:21.281083   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:21.281609   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:21.776437   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:21.776461   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:21.776470   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:21.776477   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:21.779105   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:21.779120   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:21.779128   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:21.779135   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:21.779141   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:21.779147   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:21 GMT
	I1205 20:04:21.779154   72782 round_trippers.go:580]     Audit-Id: 15475e06-1c85-4788-ae26-4dde7f351bee
	I1205 20:04:21.779161   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:21.779283   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:22.276939   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:22.276964   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:22.276974   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:22.276981   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:22.285363   72782 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1205 20:04:22.285383   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:22.285393   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:22.285399   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:22 GMT
	I1205 20:04:22.285406   72782 round_trippers.go:580]     Audit-Id: d9cbbe52-3420-42c7-b594-c20d0f14fb44
	I1205 20:04:22.285412   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:22.285418   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:22.285424   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:22.285802   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:22.776789   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:22.776814   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:22.776825   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:22.776832   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:22.779317   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:22.779335   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:22.779343   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:22 GMT
	I1205 20:04:22.779350   72782 round_trippers.go:580]     Audit-Id: cf71abde-e3ac-46e1-86ab-5dc64fa3a5cc
	I1205 20:04:22.779356   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:22.779361   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:22.779367   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:22.779374   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:22.779506   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:23.276401   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:23.276424   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:23.276433   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:23.276442   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:23.279013   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:23.279033   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:23.279041   72782 round_trippers.go:580]     Audit-Id: 8d8d96de-5b1d-40f7-9a30-4c35cb59fda7
	I1205 20:04:23.279048   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:23.279054   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:23.279060   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:23.279066   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:23.279073   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:23 GMT
	I1205 20:04:23.279602   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:23.280011   72782 node_ready.go:58] node "multinode-930892" has status "Ready":"False"
	I1205 20:04:23.776544   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:23.776565   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:23.776575   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:23.776582   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:23.779017   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:23.779034   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:23.779043   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:23.779050   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:23.779056   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:23 GMT
	I1205 20:04:23.779062   72782 round_trippers.go:580]     Audit-Id: 8c38121a-9627-4501-8a35-ac8b8371e278
	I1205 20:04:23.779069   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:23.779075   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:23.779235   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:24.277135   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:24.277160   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:24.277170   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:24.277181   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:24.280876   72782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:04:24.280895   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:24.280903   72782 round_trippers.go:580]     Audit-Id: 35c1047f-f5ca-4978-b8fe-352eff8d1568
	I1205 20:04:24.280909   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:24.280915   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:24.280921   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:24.280927   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:24.280934   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:24 GMT
	I1205 20:04:24.281275   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:24.777366   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:24.777389   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:24.777398   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:24.777405   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:24.779829   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:24.779850   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:24.779862   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:24.779869   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:24 GMT
	I1205 20:04:24.779875   72782 round_trippers.go:580]     Audit-Id: 5e6c0985-2ac6-4076-a113-750bdf8a3f37
	I1205 20:04:24.779882   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:24.779891   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:24.779899   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:24.780167   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:25.276453   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:25.276476   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:25.276486   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:25.276493   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:25.278803   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:25.278819   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:25.278828   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:25.278834   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:25.278840   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:25.278847   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:25.278853   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:25 GMT
	I1205 20:04:25.278866   72782 round_trippers.go:580]     Audit-Id: 1758ff79-6f7b-460b-b007-c193425ad504
	I1205 20:04:25.278999   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:25.777198   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:25.777221   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:25.777231   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:25.777238   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:25.779439   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:25.779458   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:25.779467   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:25 GMT
	I1205 20:04:25.779473   72782 round_trippers.go:580]     Audit-Id: 87f6ec30-f17e-403c-82e0-74f4d48a070f
	I1205 20:04:25.779479   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:25.779485   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:25.779495   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:25.779502   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:25.779607   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:25.780008   72782 node_ready.go:58] node "multinode-930892" has status "Ready":"False"
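The paired "Request Headers" / "Response Headers" blocks above come from client-go's debugging round-tripper, which minikube enables at high log verbosity. As a rough illustration only (a generic Go sketch of the wrap-and-log pattern, not client-go's actual round_trippers.go), the technique is to wrap an http.RoundTripper and log on either side of the call:

// loggingRoundTripper is a hypothetical example that emits output shaped
// like the round_trippers lines in this log. It is NOT minikube's code.
package main

import (
	"log"
	"net/http"
	"time"
)

type loggingRoundTripper struct {
	next http.RoundTripper // the real transport being wrapped
}

func (l loggingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	log.Printf("%s %s", req.Method, req.URL)
	log.Printf("Request Headers:")
	for k, vs := range req.Header {
		for _, v := range vs {
			log.Printf("    %s: %s", k, v)
		}
	}
	start := time.Now()
	resp, err := l.next.RoundTrip(req) // perform the actual request
	if err != nil {
		return nil, err
	}
	log.Printf("Response Status: %s in %d milliseconds", resp.Status, time.Since(start).Milliseconds())
	log.Printf("Response Headers:")
	for k, vs := range resp.Header {
		for _, v := range vs {
			log.Printf("    %s: %s", k, v)
		}
	}
	return resp, nil
}

func main() {
	// In the log above, the wrapped client is the kubectl/minikube REST
	// client talking to https://192.168.58.2:8443; a neutral URL is used here.
	client := &http.Client{Transport: loggingRoundTripper{next: http.DefaultTransport}}
	resp, err := client.Get("https://example.com/")
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
}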
	I1205 20:04:26.276461   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:26.276481   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:26.276490   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:26.276497   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:26.280898   72782 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:04:26.280916   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:26.280924   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:26.280931   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:26 GMT
	I1205 20:04:26.280937   72782 round_trippers.go:580]     Audit-Id: 5138338a-b474-4e0e-840e-d03eb17372ac
	I1205 20:04:26.280943   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:26.280952   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:26.280958   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:26.281100   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:26.776980   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:26.777002   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:26.777011   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:26.777018   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:26.779377   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:26.779401   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:26.779410   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:26.779417   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:26.779423   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:26.779430   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:26 GMT
	I1205 20:04:26.779443   72782 round_trippers.go:580]     Audit-Id: 55376030-b57b-40b6-87c8-889b99be0976
	I1205 20:04:26.779450   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:26.779612   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:27.277264   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:27.277289   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:27.277298   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:27.277306   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:27.286056   72782 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1205 20:04:27.286078   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:27.286086   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:27.286092   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:27.286098   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:27.286104   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:27.286111   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:27 GMT
	I1205 20:04:27.286117   72782 round_trippers.go:580]     Audit-Id: 3617a5fb-f5af-49fa-abf7-afe39f50ca11
	I1205 20:04:27.286269   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:27.777363   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:27.777385   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:27.777395   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:27.777402   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:27.779736   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:27.779776   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:27.779785   72782 round_trippers.go:580]     Audit-Id: 4e06e793-176c-4c67-94cf-61387eee0a71
	I1205 20:04:27.779791   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:27.779798   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:27.779804   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:27.779810   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:27.779816   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:27 GMT
	I1205 20:04:27.779937   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:27.780346   72782 node_ready.go:58] node "multinode-930892" has status "Ready":"False"
	I1205 20:04:28.277079   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:28.277099   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:28.277108   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:28.277116   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:28.279401   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:28.279418   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:28.279426   72782 round_trippers.go:580]     Audit-Id: dc06c47c-fcf9-4a28-85e6-9f3168eab055
	I1205 20:04:28.279433   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:28.279440   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:28.279446   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:28.279452   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:28.279458   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:28 GMT
	I1205 20:04:28.279994   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:28.777057   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:28.777081   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:28.777091   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:28.777099   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:28.779502   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:28.779524   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:28.779533   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:28.779541   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:28.779547   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:28.779553   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:28.779559   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:28 GMT
	I1205 20:04:28.779570   72782 round_trippers.go:580]     Audit-Id: 4fd53383-0387-404c-9a08-e6223d374822
	I1205 20:04:28.779716   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:29.277334   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:29.277355   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:29.277365   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:29.277372   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:29.280161   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:29.280193   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:29.280202   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:29.280211   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:29 GMT
	I1205 20:04:29.280219   72782 round_trippers.go:580]     Audit-Id: d7635fa5-5c7a-4d55-980b-66a4f3834816
	I1205 20:04:29.280225   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:29.280231   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:29.280237   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:29.280496   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:29.776480   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:29.776502   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:29.776511   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:29.776519   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:29.778877   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:29.778898   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:29.778906   72782 round_trippers.go:580]     Audit-Id: 2f7e4fb7-6b0c-451e-b53b-9d364654bc14
	I1205 20:04:29.778914   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:29.778920   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:29.778927   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:29.778934   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:29.778944   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:29 GMT
	I1205 20:04:29.779225   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:30.276480   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:30.276503   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:30.276520   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:30.276528   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:30.287617   72782 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1205 20:04:30.287639   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:30.287651   72782 round_trippers.go:580]     Audit-Id: 2e8a9980-151a-4c76-9fa7-fc8d9f54e236
	I1205 20:04:30.287657   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:30.287664   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:30.287670   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:30.287677   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:30.287683   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:30 GMT
	I1205 20:04:30.287895   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:30.288292   72782 node_ready.go:58] node "multinode-930892" has status "Ready":"False"
	I1205 20:04:30.777250   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:30.777271   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:30.777280   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:30.777287   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:30.779672   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:30.779699   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:30.779707   72782 round_trippers.go:580]     Audit-Id: 4a22447d-cdf7-4815-80a6-c591f9b1cecd
	I1205 20:04:30.779714   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:30.779720   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:30.779726   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:30.779732   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:30.779738   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:30 GMT
	I1205 20:04:30.779879   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:31.277167   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:31.277187   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:31.277196   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:31.277203   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:31.279784   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:31.279806   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:31.279814   72782 round_trippers.go:580]     Audit-Id: 862d98d3-44e4-497b-91ca-a65f9021cdbd
	I1205 20:04:31.279820   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:31.279826   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:31.279832   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:31.279839   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:31.279845   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:31 GMT
	I1205 20:04:31.280365   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:31.776388   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:31.776412   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:31.776422   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:31.776429   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:31.778823   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:31.778843   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:31.778850   72782 round_trippers.go:580]     Audit-Id: c141e1a2-c038-4fc4-8bfc-f716e6d9ebee
	I1205 20:04:31.778857   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:31.778863   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:31.778869   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:31.778875   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:31.778881   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:31 GMT
	I1205 20:04:31.778985   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:32.276927   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:32.276950   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:32.276960   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:32.276967   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:32.281954   72782 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:04:32.281973   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:32.281982   72782 round_trippers.go:580]     Audit-Id: 32b14367-cbdb-41c1-b9d5-9d0e461cc035
	I1205 20:04:32.281989   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:32.281995   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:32.282001   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:32.282008   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:32.282028   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:32 GMT
	I1205 20:04:32.282603   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:32.776718   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:32.776737   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:32.776747   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:32.776754   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:32.779077   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:32.779097   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:32.779105   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:32.779111   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:32.779118   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:32 GMT
	I1205 20:04:32.779125   72782 round_trippers.go:580]     Audit-Id: e2bbeb2f-a3bb-4c0f-8fec-c5b2e88dd945
	I1205 20:04:32.779131   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:32.779137   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:32.779278   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:32.779674   72782 node_ready.go:58] node "multinode-930892" has status "Ready":"False"
	I1205 20:04:33.276470   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:33.276497   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:33.276507   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:33.276514   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:33.279555   72782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:04:33.279575   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:33.279583   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:33.279590   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:33.279596   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:33.279602   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:33 GMT
	I1205 20:04:33.279609   72782 round_trippers.go:580]     Audit-Id: 61664fc4-b2b7-4a6d-9d62-02cfdb30db2a
	I1205 20:04:33.279618   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:33.280336   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:33.777422   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:33.777447   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:33.777457   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:33.777474   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:33.779934   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:33.779953   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:33.779962   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:33.779968   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:33.779975   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:33.779981   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:33 GMT
	I1205 20:04:33.779987   72782 round_trippers.go:580]     Audit-Id: 4eac7a20-5866-49f9-8d5a-07e75ca79a7d
	I1205 20:04:33.779993   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:33.780116   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:34.277110   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:34.277137   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:34.277148   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:34.277162   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:34.285787   72782 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1205 20:04:34.285810   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:34.285819   72782 round_trippers.go:580]     Audit-Id: c855842d-fddf-4226-b789-8e14645c2ce8
	I1205 20:04:34.285825   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:34.285832   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:34.285838   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:34.285844   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:34.285851   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:34 GMT
	I1205 20:04:34.285986   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:34.776443   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:34.776468   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:34.776478   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:34.776486   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:34.778868   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:34.778885   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:34.778894   72782 round_trippers.go:580]     Audit-Id: 7a68a311-215e-4565-96e3-deb138f3afc0
	I1205 20:04:34.778901   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:34.778907   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:34.778913   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:34.778919   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:34.778925   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:34 GMT
	I1205 20:04:34.779050   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:35.277828   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:35.277849   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:35.277858   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:35.277866   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:35.285611   72782 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1205 20:04:35.285639   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:35.285648   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:35.285656   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:35.285663   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:35 GMT
	I1205 20:04:35.285669   72782 round_trippers.go:580]     Audit-Id: 702d02a4-d2da-4e47-afa2-ec0ad67faf8b
	I1205 20:04:35.285675   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:35.285682   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:35.285824   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:35.286232   72782 node_ready.go:58] node "multinode-930892" has status "Ready":"False"
	I1205 20:04:35.776546   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:35.776570   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:35.776579   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:35.776587   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:35.778926   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:35.778944   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:35.778952   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:35.778959   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:35.778965   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:35 GMT
	I1205 20:04:35.778972   72782 round_trippers.go:580]     Audit-Id: 40b424a2-378d-4f59-b872-f9b992911b08
	I1205 20:04:35.778978   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:35.778984   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:35.779093   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:36.276526   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:36.276551   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:36.276561   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:36.276569   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:36.279497   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:36.279517   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:36.279525   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:36.279532   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:36.279538   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:36.279545   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:36.279552   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:36 GMT
	I1205 20:04:36.279572   72782 round_trippers.go:580]     Audit-Id: 6ca721ec-ae3b-4829-81ca-7f406f70d202
	I1205 20:04:36.280114   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:36.777222   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:36.777245   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:36.777255   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:36.777262   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:36.779697   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:36.779718   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:36.779726   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:36.779732   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:36.779739   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:36 GMT
	I1205 20:04:36.779746   72782 round_trippers.go:580]     Audit-Id: 99cb1fd1-f6ed-4c4b-a6fe-060ac02f29b3
	I1205 20:04:36.779768   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:36.779775   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:36.780111   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:37.277126   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:37.277147   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:37.277156   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:37.277164   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:37.285807   72782 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1205 20:04:37.285829   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:37.285837   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:37.285844   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:37.285850   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:37.285857   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:37.285865   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:37 GMT
	I1205 20:04:37.285871   72782 round_trippers.go:580]     Audit-Id: 9bfd9a38-98be-421a-87e3-b6ebc1f43172
	I1205 20:04:37.286055   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:37.286469   72782 node_ready.go:58] node "multinode-930892" has status "Ready":"False"
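Each ~500 ms GET above is one iteration of minikube's node-readiness wait: fetch the Node object, inspect its Ready condition, and log the status until it flips to True. A minimal sketch of that polling pattern with client-go follows; the helper name, interval, and timeout are assumptions for illustration, not minikube's actual node_ready.go:

// waitNodeReady is a hypothetical helper that mirrors the poll loop seen in
// this log. Assumptions: 500 ms interval, hard failure on API errors.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(ctx context.Context, client kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err // abort on API errors; a real waiter may retry
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					// Mirrors the log line: node "..." has status "Ready":"False"
					fmt.Printf("node %q has status %q:%q\n", name, c.Type, c.Status)
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // condition not reported yet; keep polling
		})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	if err := waitNodeReady(context.Background(), client, "multinode-930892", 6*time.Minute); err != nil {
		panic(err)
	}
}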
	I1205 20:04:37.776874   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:37.776897   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:37.776906   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:37.776913   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:37.779318   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:37.779338   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:37.779346   72782 round_trippers.go:580]     Audit-Id: 7cdd8443-d5c3-484f-aa8d-7c7ffa0c63e1
	I1205 20:04:37.779353   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:37.779359   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:37.779366   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:37.779372   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:37.779378   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:37 GMT
	I1205 20:04:37.779510   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:38.276442   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:38.276476   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:38.276486   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:38.276493   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:38.278966   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:38.278985   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:38.278994   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:38.279001   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:38 GMT
	I1205 20:04:38.279022   72782 round_trippers.go:580]     Audit-Id: e68b8a41-0a51-40bf-b2df-5510c083980e
	I1205 20:04:38.279029   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:38.279035   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:38.279041   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:38.279450   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:38.776499   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:38.776521   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:38.776539   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:38.776547   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:38.779114   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:38.779136   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:38.779148   72782 round_trippers.go:580]     Audit-Id: 74177a70-da95-456a-a342-48837fe53fbe
	I1205 20:04:38.779155   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:38.779161   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:38.779167   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:38.779173   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:38.779180   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:38 GMT
	I1205 20:04:38.779295   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:39.276864   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:39.276895   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:39.276905   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:39.276913   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:39.285409   72782 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1205 20:04:39.285430   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:39.285439   72782 round_trippers.go:580]     Audit-Id: f4d5bc79-0a29-42a7-82e5-4d9e2c2b7e9d
	I1205 20:04:39.285446   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:39.285452   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:39.285459   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:39.285465   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:39.285472   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:39 GMT
	I1205 20:04:39.285623   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:39.776480   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:39.776501   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:39.776510   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:39.776518   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:39.778880   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:39.778897   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:39.778910   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:39.778916   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:39 GMT
	I1205 20:04:39.778923   72782 round_trippers.go:580]     Audit-Id: b130892c-d867-42b4-bd7e-10944c205940
	I1205 20:04:39.778928   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:39.778935   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:39.778941   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:39.779152   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:39.779563   72782 node_ready.go:58] node "multinode-930892" has status "Ready":"False"
	I1205 20:04:40.276463   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:40.276481   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:40.276491   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:40.276498   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:40.285501   72782 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1205 20:04:40.285522   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:40.285531   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:40.285537   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:40.285544   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:40.285550   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:40 GMT
	I1205 20:04:40.285556   72782 round_trippers.go:580]     Audit-Id: 1c681752-77b2-4ce5-8569-d26e64f3046e
	I1205 20:04:40.285562   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:40.285827   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:40.777092   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:40.777116   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:40.777125   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:40.777133   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:40.779446   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:40.779463   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:40.779471   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:40.779478   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:40 GMT
	I1205 20:04:40.779484   72782 round_trippers.go:580]     Audit-Id: 9e4ee3cb-60f9-43a3-8856-05c6bec7a24b
	I1205 20:04:40.779490   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:40.779498   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:40.779504   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:40.779660   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:41.276396   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:41.276417   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:41.276426   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:41.276433   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:41.278775   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:41.278792   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:41.278801   72782 round_trippers.go:580]     Audit-Id: ddb005f0-f810-4e25-9340-d070997217ad
	I1205 20:04:41.278807   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:41.278818   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:41.278831   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:41.278837   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:41.278851   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:41 GMT
	I1205 20:04:41.279257   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:41.777347   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:41.777367   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:41.777377   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:41.777384   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:41.779506   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:41.779524   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:41.779532   72782 round_trippers.go:580]     Audit-Id: 0a0eac59-49a6-4eaa-870c-063adb094c7f
	I1205 20:04:41.779538   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:41.779544   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:41.779550   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:41.779556   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:41.779563   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:41 GMT
	I1205 20:04:41.779720   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:41.780149   72782 node_ready.go:58] node "multinode-930892" has status "Ready":"False"
	I1205 20:04:42.277062   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:42.277089   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:42.277099   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:42.277106   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:42.283676   72782 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 20:04:42.283704   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:42.283713   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:42.283720   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:42.283726   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:42.283732   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:42.283739   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:42 GMT
	I1205 20:04:42.283749   72782 round_trippers.go:580]     Audit-Id: ab89dc39-c006-41f4-a78c-0e231cbd593a
	I1205 20:04:42.284354   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:42.777441   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:42.777463   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:42.777472   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:42.777479   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:42.779924   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:42.779942   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:42.779956   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:42.779964   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:42.779971   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:42 GMT
	I1205 20:04:42.779978   72782 round_trippers.go:580]     Audit-Id: ad9df1ee-281f-48c0-a11b-05a9ab3f1002
	I1205 20:04:42.779984   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:42.779996   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:42.780347   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:43.276425   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:43.276448   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:43.276457   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:43.276465   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:43.278847   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:43.278867   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:43.278875   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:43.278882   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:43 GMT
	I1205 20:04:43.278888   72782 round_trippers.go:580]     Audit-Id: 304465c9-fc65-4d7d-bc86-9c3d3d1cc686
	I1205 20:04:43.278894   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:43.278900   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:43.278906   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:43.279300   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:43.777100   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:43.777124   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:43.777134   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:43.777147   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:43.779610   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:43.779635   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:43.779644   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:43.779653   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:43 GMT
	I1205 20:04:43.779659   72782 round_trippers.go:580]     Audit-Id: 6208f95a-9db2-43c0-acc6-f1cceeb46e53
	I1205 20:04:43.779665   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:43.779672   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:43.779684   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:43.779951   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:43.780369   72782 node_ready.go:58] node "multinode-930892" has status "Ready":"False"
	I1205 20:04:44.277138   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:44.277157   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:44.277167   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:44.277174   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:44.286452   72782 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1205 20:04:44.286476   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:44.286485   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:44.286491   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:44.286497   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:44.286504   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:44 GMT
	I1205 20:04:44.286510   72782 round_trippers.go:580]     Audit-Id: ec705dd9-78cf-42a3-8e08-e384eb491f7a
	I1205 20:04:44.286520   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:44.286646   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:44.777306   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:44.777331   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:44.777342   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:44.777349   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:44.779732   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:44.779768   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:44.779777   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:44.779784   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:44 GMT
	I1205 20:04:44.779790   72782 round_trippers.go:580]     Audit-Id: 04bcc27a-1625-4d1d-9ebf-4a3b07354985
	I1205 20:04:44.779796   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:44.779816   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:44.779830   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:44.779958   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:45.276959   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:45.276981   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:45.276991   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:45.276999   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:45.280147   72782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:04:45.280172   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:45.280183   72782 round_trippers.go:580]     Audit-Id: 08eb9289-2cbf-4497-9e1d-59addd969e04
	I1205 20:04:45.280190   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:45.280197   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:45.280203   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:45.280211   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:45.280217   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:45 GMT
	I1205 20:04:45.280471   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"352","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1205 20:04:45.776970   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:45.776992   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:45.777001   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:45.777008   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:45.779361   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:45.779377   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:45.779385   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:45 GMT
	I1205 20:04:45.779392   72782 round_trippers.go:580]     Audit-Id: fa8f96ea-81c2-4173-8dac-9b971654d2b0
	I1205 20:04:45.779399   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:45.779405   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:45.779419   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:45.779426   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:45.780440   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"437","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1205 20:04:45.780934   72782 node_ready.go:49] node "multinode-930892" has status "Ready":"True"
	I1205 20:04:45.780980   72782 node_ready.go:38] duration metric: took 31.533808062s waiting for node "multinode-930892" to be "Ready" ...
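	[editor's note] Everything from the start of this stretch down to the duration metric above is a single readiness wait: the same node object is re-fetched roughly every 500ms (note the alternating .276/.776 timestamps) and its Ready condition inspected until it flips to True, 31.5s in total here. A minimal sketch of an equivalent poll, assuming a recent client-go; waitNodeReady and the interval/timeout values are illustrative, not minikube's node_ready.go:

```go
package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the node object until its Ready condition reports True,
// mirroring the loop in the trace above. Interval and timeout are assumed values.
func waitNodeReady(ctx context.Context, client kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err // a failed request aborts the wait
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // Ready condition not posted yet; poll again
		})
}
```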
	I1205 20:04:45.781004   72782 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:04:45.781095   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1205 20:04:45.781132   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:45.781153   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:45.781174   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:45.785796   72782 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:04:45.785816   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:45.785825   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:45.785832   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:45 GMT
	I1205 20:04:45.785838   72782 round_trippers.go:580]     Audit-Id: 69301ce8-eb59-4a51-a427-cfea5491e891
	I1205 20:04:45.785844   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:45.785851   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:45.785857   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:45.786434   72782 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"443"},"items":[{"metadata":{"name":"coredns-5dd5756b68-jg6xb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"68a13ae5-1cba-4475-b33a-8090d3001eae","resourceVersion":"443","creationTimestamp":"2023-12-05T20:04:13Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9044bb53-e854-441b-a046-ca23be2eacc5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9044bb53-e854-441b-a046-ca23be2eacc5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55534 chars]
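	[editor's note] The pod phase opens with a single unfiltered list of the kube-system namespace (the PodList response above); the selectors quoted at the start of the phase (k8s-app=kube-dns, component=etcd, and so on) are then applied to pick out the system-critical pods. A sketch of that selection step, under the assumption that the filtering is client-side, since the GET above carries no label selector; systemCriticalPods is a hypothetical helper:

```go
// Same package/imports as the waitNodeReady sketch above.

// systemCriticalPods keeps the kube-system pods carrying one of the labels
// quoted in the trace. Hypothetical helper; the filtering here is client-side,
// matching the single unfiltered GET in the log.
func systemCriticalPods(ctx context.Context, client kubernetes.Interface) ([]corev1.Pod, error) {
	list, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	wanted := map[string][]string{
		"k8s-app":   {"kube-dns", "kube-proxy"},
		"component": {"etcd", "kube-apiserver", "kube-controller-manager", "kube-scheduler"},
	}
	var pods []corev1.Pod
match:
	for _, pod := range list.Items {
		for key, values := range wanted {
			for _, v := range values {
				if pod.Labels[key] == v {
					pods = append(pods, pod)
					continue match // each pod is kept at most once
				}
			}
		}
	}
	return pods, nil
}
```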
	I1205 20:04:45.790713   72782 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-jg6xb" in "kube-system" namespace to be "Ready" ...
	I1205 20:04:45.790853   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-jg6xb
	I1205 20:04:45.790892   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:45.790913   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:45.790934   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:45.793839   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:45.793887   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:45.793917   72782 round_trippers.go:580]     Audit-Id: b0e0529b-b9f3-44e9-b77e-d910c8a383d8
	I1205 20:04:45.793938   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:45.793958   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:45.793977   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:45.794013   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:45.794034   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:45 GMT
	I1205 20:04:45.794362   72782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-jg6xb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"68a13ae5-1cba-4475-b33a-8090d3001eae","resourceVersion":"443","creationTimestamp":"2023-12-05T20:04:13Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9044bb53-e854-441b-a046-ca23be2eacc5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9044bb53-e854-441b-a046-ca23be2eacc5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1205 20:04:45.795166   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:45.795216   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:45.795239   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:45.795259   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:45.812741   72782 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1205 20:04:45.816795   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:45.816823   72782 round_trippers.go:580]     Audit-Id: a02596ca-5611-444f-8be4-49496c9ca111
	I1205 20:04:45.816845   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:45.816872   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:45.816894   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:45.816913   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:45.816933   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:45 GMT
	I1205 20:04:45.817095   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"437","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1205 20:04:45.817579   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-jg6xb
	I1205 20:04:45.817598   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:45.817606   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:45.817614   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:45.822343   72782 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:04:45.822365   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:45.822373   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:45.822379   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:45.822386   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:45 GMT
	I1205 20:04:45.822392   72782 round_trippers.go:580]     Audit-Id: 59b0a78a-2459-4563-8070-fe60aeff6a82
	I1205 20:04:45.822399   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:45.822405   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:45.822523   72782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-jg6xb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"68a13ae5-1cba-4475-b33a-8090d3001eae","resourceVersion":"443","creationTimestamp":"2023-12-05T20:04:13Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9044bb53-e854-441b-a046-ca23be2eacc5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9044bb53-e854-441b-a046-ca23be2eacc5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1205 20:04:45.823046   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:45.823062   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:45.823070   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:45.823078   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:45.825917   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:45.825942   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:45.825950   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:45 GMT
	I1205 20:04:45.825957   72782 round_trippers.go:580]     Audit-Id: 0e7653e7-d9fa-4ab7-a070-86dce0a6e0b7
	I1205 20:04:45.825963   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:45.825970   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:45.825976   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:45.825982   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:45.826111   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"437","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1205 20:04:46.326691   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-jg6xb
	I1205 20:04:46.326713   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:46.326722   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:46.326730   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:46.329177   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:46.329228   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:46.329251   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:46 GMT
	I1205 20:04:46.329272   72782 round_trippers.go:580]     Audit-Id: 11afd0f3-cef8-4385-883c-2b1b0953a714
	I1205 20:04:46.329305   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:46.329326   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:46.329345   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:46.329360   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:46.329735   72782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-jg6xb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"68a13ae5-1cba-4475-b33a-8090d3001eae","resourceVersion":"456","creationTimestamp":"2023-12-05T20:04:13Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9044bb53-e854-441b-a046-ca23be2eacc5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9044bb53-e854-441b-a046-ca23be2eacc5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1205 20:04:46.330261   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:46.330279   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:46.330287   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:46.330295   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:46.332327   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:46.332347   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:46.332354   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:46.332361   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:46.332367   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:46 GMT
	I1205 20:04:46.332373   72782 round_trippers.go:580]     Audit-Id: 2646e460-20ad-491d-bb3a-40093d174e00
	I1205 20:04:46.332380   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:46.332389   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:46.332675   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"437","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1205 20:04:46.333044   72782 pod_ready.go:92] pod "coredns-5dd5756b68-jg6xb" in "kube-system" namespace has status "Ready":"True"
	I1205 20:04:46.333062   72782 pod_ready.go:81] duration metric: took 542.282434ms waiting for pod "coredns-5dd5756b68-jg6xb" in "kube-system" namespace to be "Ready" ...
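	[editor's note] Each per-pod wait repeats the node pattern in miniature: fetch the pod, read its Ready condition, and re-fetch the node between checks (the paired pod/node GETs above). coredns needed one extra 500ms round before turning Ready; etcd below passes on the first fetch. A sketch of the per-pod condition check, under the same assumptions as the earlier sketches; podReady is a hypothetical name:

```go
// Same package/imports as the waitNodeReady sketch above.

// podReady reports whether the named pod's Ready condition is True; a caller
// polls it the same way waitNodeReady polls the node. Hypothetical helper.
func podReady(ctx context.Context, client kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := client.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil // condition not posted yet
}
```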
	I1205 20:04:46.333085   72782 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-930892" in "kube-system" namespace to be "Ready" ...
	I1205 20:04:46.333138   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-930892
	I1205 20:04:46.333146   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:46.333153   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:46.333160   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:46.335058   72782 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:04:46.335077   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:46.335084   72782 round_trippers.go:580]     Audit-Id: 53b57869-89ce-4c7d-acc6-a344ad2d7218
	I1205 20:04:46.335091   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:46.335097   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:46.335103   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:46.335109   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:46.335119   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:46 GMT
	I1205 20:04:46.335280   72782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-930892","namespace":"kube-system","uid":"610946f2-2a5c-4e9c-8bee-127cca42502c","resourceVersion":"424","creationTimestamp":"2023-12-05T20:04:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"3abdfcff53e2d32d8b1b2cebb83c49c3","kubernetes.io/config.mirror":"3abdfcff53e2d32d8b1b2cebb83c49c3","kubernetes.io/config.seen":"2023-12-05T20:04:00.077941695Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1205 20:04:46.335696   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:46.335712   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:46.335719   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:46.335727   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:46.337645   72782 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:04:46.337664   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:46.337672   72782 round_trippers.go:580]     Audit-Id: 4e872f37-9d80-41c6-8479-2bcd6b43149f
	I1205 20:04:46.337678   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:46.337685   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:46.337691   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:46.337701   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:46.337710   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:46 GMT
	I1205 20:04:46.338013   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"437","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1205 20:04:46.338423   72782 pod_ready.go:92] pod "etcd-multinode-930892" in "kube-system" namespace has status "Ready":"True"
	I1205 20:04:46.338441   72782 pod_ready.go:81] duration metric: took 5.348454ms waiting for pod "etcd-multinode-930892" in "kube-system" namespace to be "Ready" ...
	I1205 20:04:46.338455   72782 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-930892" in "kube-system" namespace to be "Ready" ...
	I1205 20:04:46.338503   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-930892
	I1205 20:04:46.338514   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:46.338522   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:46.338530   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:46.340686   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:46.340756   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:46.340769   72782 round_trippers.go:580]     Audit-Id: ca723265-5488-412f-a44a-11390b610205
	I1205 20:04:46.340776   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:46.340782   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:46.340801   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:46.340812   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:46.340819   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:46 GMT
	I1205 20:04:46.340971   72782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-930892","namespace":"kube-system","uid":"ff4b2f9f-04b3-4c77-abdd-ed293fe3336d","resourceVersion":"425","creationTimestamp":"2023-12-05T20:04:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b526022c473bec524d839dcb362d3da6","kubernetes.io/config.mirror":"b526022c473bec524d839dcb362d3da6","kubernetes.io/config.seen":"2023-12-05T20:04:00.077933424Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1205 20:04:46.341452   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:46.341468   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:46.341476   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:46.341485   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:46.343381   72782 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:04:46.343400   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:46.343407   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:46.343414   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:46.343420   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:46.343427   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:46.343434   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:46 GMT
	I1205 20:04:46.343440   72782 round_trippers.go:580]     Audit-Id: 26f1e0da-40c3-49a0-9630-f5c7a5dccad8
	I1205 20:04:46.343543   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"437","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1205 20:04:46.343955   72782 pod_ready.go:92] pod "kube-apiserver-multinode-930892" in "kube-system" namespace has status "Ready":"True"
	I1205 20:04:46.343972   72782 pod_ready.go:81] duration metric: took 5.510721ms waiting for pod "kube-apiserver-multinode-930892" in "kube-system" namespace to be "Ready" ...
	I1205 20:04:46.343982   72782 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-930892" in "kube-system" namespace to be "Ready" ...
	I1205 20:04:46.377261   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-930892
	I1205 20:04:46.377282   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:46.377291   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:46.377299   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:46.379719   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:46.379789   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:46.379812   72782 round_trippers.go:580]     Audit-Id: 7ba9efcb-2de9-4ef2-9d62-0f9273807bd3
	I1205 20:04:46.379833   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:46.379863   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:46.379886   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:46.379905   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:46.379925   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:46 GMT
	I1205 20:04:46.380162   72782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-930892","namespace":"kube-system","uid":"bf7a9066-c8ab-4c6e-b0cd-970b69612e10","resourceVersion":"426","creationTimestamp":"2023-12-05T20:04:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"140869da40b493f6e05a96a4f7fbfe02","kubernetes.io/config.mirror":"140869da40b493f6e05a96a4f7fbfe02","kubernetes.io/config.seen":"2023-12-05T20:04:00.077939233Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1205 20:04:46.576970   72782 request.go:629] Waited for 196.261104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:46.577090   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:46.577099   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:46.577108   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:46.577116   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:46.579576   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:46.579598   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:46.579606   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:46.579612   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:46.579619   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:46.579625   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:46 GMT
	I1205 20:04:46.579632   72782 round_trippers.go:580]     Audit-Id: 905f671b-614e-4686-aecc-676d387de39d
	I1205 20:04:46.579638   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:46.579965   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"437","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1205 20:04:46.580383   72782 pod_ready.go:92] pod "kube-controller-manager-multinode-930892" in "kube-system" namespace has status "Ready":"True"
	I1205 20:04:46.580399   72782 pod_ready.go:81] duration metric: took 236.410531ms waiting for pod "kube-controller-manager-multinode-930892" in "kube-system" namespace to be "Ready" ...
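
The repeated "Waited ... due to client-side throttling, not priority and fairness" lines above come from client-go's token-bucket rate limiter, which delays requests on the client before the server's priority-and-fairness machinery is ever involved. A minimal sketch of where those limits live, assuming a standard kubeconfig; the QPS/Burst values here are illustrative, not minikube's actual settings:

	package main

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cfg.QPS = 50    // client-go default is 5; requests beyond this wait locally
		cfg.Burst = 100 // default is 10; allows short bursts above QPS
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		_ = cs // use the clientset as usual; the client-side waits shrink accordingly
	}

Raising QPS/Burst trades fewer of these ~200ms client-side waits for more load on the apiserver.
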
	I1205 20:04:46.580412   72782 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-skbnx" in "kube-system" namespace to be "Ready" ...
	I1205 20:04:46.777781   72782 request.go:629] Waited for 197.308916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-skbnx
	I1205 20:04:46.777866   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-skbnx
	I1205 20:04:46.777877   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:46.777885   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:46.777893   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:46.780485   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:46.780585   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:46.780616   72782 round_trippers.go:580]     Audit-Id: c6fa12d7-a136-4d7f-8ea3-0608f746c0e8
	I1205 20:04:46.780637   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:46.780649   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:46.780656   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:46.780666   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:46.780672   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:46 GMT
	I1205 20:04:46.780802   72782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-skbnx","generateName":"kube-proxy-","namespace":"kube-system","uid":"18565024-772b-429b-8d9b-77a81590210e","resourceVersion":"420","creationTimestamp":"2023-12-05T20:04:13Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"5db642e7-1b4a-4211-a43b-b4b188b9f76b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5db642e7-1b4a-4211-a43b-b4b188b9f76b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1205 20:04:46.977622   72782 request.go:629] Waited for 196.343714ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:46.977701   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:46.977727   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:46.977739   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:46.977749   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:46.980227   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:46.980249   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:46.980257   72782 round_trippers.go:580]     Audit-Id: e9e40077-ad97-4ad8-b662-5ae2778b6443
	I1205 20:04:46.980264   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:46.980287   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:46.980300   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:46.980307   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:46.980318   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:46 GMT
	I1205 20:04:46.980521   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"437","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1205 20:04:46.980961   72782 pod_ready.go:92] pod "kube-proxy-skbnx" in "kube-system" namespace has status "Ready":"True"
	I1205 20:04:46.980978   72782 pod_ready.go:81] duration metric: took 400.557776ms waiting for pod "kube-proxy-skbnx" in "kube-system" namespace to be "Ready" ...
	I1205 20:04:46.980989   72782 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-930892" in "kube-system" namespace to be "Ready" ...
	I1205 20:04:47.177362   72782 request.go:629] Waited for 196.313921ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-930892
	I1205 20:04:47.177476   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-930892
	I1205 20:04:47.177488   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:47.177515   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:47.177523   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:47.179903   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:47.179940   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:47.179948   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:47.179954   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:47.179963   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:47.179973   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:47 GMT
	I1205 20:04:47.179980   72782 round_trippers.go:580]     Audit-Id: dcf353a2-de4a-4ac4-92aa-8ceb1bf7d49c
	I1205 20:04:47.179990   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:47.180150   72782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-930892","namespace":"kube-system","uid":"9e837e17-e45a-4631-92ba-602746f09a15","resourceVersion":"427","creationTimestamp":"2023-12-05T20:04:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fe34a8451b0f5ac84df3ae08c2adbedb","kubernetes.io/config.mirror":"fe34a8451b0f5ac84df3ae08c2adbedb","kubernetes.io/config.seen":"2023-12-05T20:04:00.077940382Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1205 20:04:47.377953   72782 request.go:629] Waited for 197.33354ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:47.378035   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:04:47.378044   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:47.378061   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:47.378073   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:47.380527   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:47.380547   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:47.380555   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:47.380561   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:47 GMT
	I1205 20:04:47.380567   72782 round_trippers.go:580]     Audit-Id: 0cea64d0-66e1-4a88-898f-a5075a7d433a
	I1205 20:04:47.380574   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:47.380580   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:47.380588   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:47.380810   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"437","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1205 20:04:47.381227   72782 pod_ready.go:92] pod "kube-scheduler-multinode-930892" in "kube-system" namespace has status "Ready":"True"
	I1205 20:04:47.381246   72782 pod_ready.go:81] duration metric: took 400.249138ms waiting for pod "kube-scheduler-multinode-930892" in "kube-system" namespace to be "Ready" ...
	I1205 20:04:47.381258   72782 pod_ready.go:38] duration metric: took 1.600223168s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
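
Each pod_ready.go cycle above is a GET of the pod followed by a GET of its node; the "Ready":"True" verdict is read from the pod's PodReady condition. A minimal client-go sketch of that check (isPodReady is a hypothetical name, not minikube's function):

	package podcheck

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// isPodReady reports whether the named pod has the PodReady
	// condition set to ConditionTrue, the same signal behind the
	// pod_ready.go:92 lines above.
	func isPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
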
	I1205 20:04:47.381275   72782 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:04:47.381364   72782 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:04:47.392409   72782 command_runner.go:130] > 1283
	I1205 20:04:47.393551   72782 api_server.go:72] duration metric: took 33.201100762s to wait for apiserver process to appear ...
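
The process check above is a single pgrep run over SSH: -f matches the pattern against the full command line, -x requires that match to cover the whole line, and -n prints only the newest matching PID (the "1283" echoed back). A local sketch of the same probe, assuming pgrep is on PATH:

	package main

	import (
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// pgrep exits non-zero when nothing matches, so the error check
		// doubles as the "is the apiserver running" test.
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			log.Fatalf("no kube-apiserver process found: %v", err)
		}
		log.Printf("apiserver pid: %s", strings.TrimSpace(string(out))) // e.g. "1283" as logged above
	}
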
	I1205 20:04:47.393566   72782 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:04:47.393583   72782 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1205 20:04:47.402163   72782 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1205 20:04:47.402231   72782 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I1205 20:04:47.402242   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:47.402251   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:47.402258   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:47.403377   72782 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:04:47.403398   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:47.403406   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:47.403412   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:47.403419   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:47.403427   72782 round_trippers.go:580]     Content-Length: 264
	I1205 20:04:47.403438   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:47 GMT
	I1205 20:04:47.403450   72782 round_trippers.go:580]     Audit-Id: 6e26219a-38ed-4142-acc1-de7e7e72a3ec
	I1205 20:04:47.403457   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:47.403474   72782 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I1205 20:04:47.403562   72782 api_server.go:141] control plane version: v1.28.4
	I1205 20:04:47.403580   72782 api_server.go:131] duration metric: took 10.008091ms to wait for apiserver health ...
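
Health is probed twice: a raw GET of /healthz that must return the literal body "ok", and a GET of /version whose JSON body (printed above) carries the control plane's gitVersion. A minimal sketch of both calls with client-go, assuming an existing clientset (checkAPIServer is a hypothetical name):

	package apicheck

	import (
		"context"
		"fmt"

		"k8s.io/client-go/kubernetes"
	)

	func checkAPIServer(ctx context.Context, cs *kubernetes.Clientset) error {
		// /healthz is an unversioned path; DoRaw returns the literal "ok" body.
		body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
		if err != nil || string(body) != "ok" {
			return fmt.Errorf("healthz failed: %q, %v", body, err)
		}
		// ServerVersion decodes the /version JSON shown above.
		ver, err := cs.Discovery().ServerVersion()
		if err != nil {
			return err
		}
		fmt.Println("control plane version:", ver.GitVersion) // "v1.28.4"
		return nil
	}
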
	I1205 20:04:47.403588   72782 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:04:47.577962   72782 request.go:629] Waited for 174.315936ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1205 20:04:47.578054   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1205 20:04:47.578066   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:47.578074   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:47.578082   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:47.581352   72782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:04:47.581375   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:47.581383   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:47.581390   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:47 GMT
	I1205 20:04:47.581399   72782 round_trippers.go:580]     Audit-Id: 536f732b-e78c-4699-b2e2-7209b88cff26
	I1205 20:04:47.581406   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:47.581412   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:47.581422   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:47.581805   72782 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"460"},"items":[{"metadata":{"name":"coredns-5dd5756b68-jg6xb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"68a13ae5-1cba-4475-b33a-8090d3001eae","resourceVersion":"456","creationTimestamp":"2023-12-05T20:04:13Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9044bb53-e854-441b-a046-ca23be2eacc5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9044bb53-e854-441b-a046-ca23be2eacc5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1205 20:04:47.584176   72782 system_pods.go:59] 8 kube-system pods found
	I1205 20:04:47.584205   72782 system_pods.go:61] "coredns-5dd5756b68-jg6xb" [68a13ae5-1cba-4475-b33a-8090d3001eae] Running
	I1205 20:04:47.584211   72782 system_pods.go:61] "etcd-multinode-930892" [610946f2-2a5c-4e9c-8bee-127cca42502c] Running
	I1205 20:04:47.584216   72782 system_pods.go:61] "kindnet-xtm24" [8c6bc758-aa3f-4204-98bb-68c004cdc2a8] Running
	I1205 20:04:47.584221   72782 system_pods.go:61] "kube-apiserver-multinode-930892" [ff4b2f9f-04b3-4c77-abdd-ed293fe3336d] Running
	I1205 20:04:47.584228   72782 system_pods.go:61] "kube-controller-manager-multinode-930892" [bf7a9066-c8ab-4c6e-b0cd-970b69612e10] Running
	I1205 20:04:47.584233   72782 system_pods.go:61] "kube-proxy-skbnx" [18565024-772b-429b-8d9b-77a81590210e] Running
	I1205 20:04:47.584244   72782 system_pods.go:61] "kube-scheduler-multinode-930892" [9e837e17-e45a-4631-92ba-602746f09a15] Running
	I1205 20:04:47.584249   72782 system_pods.go:61] "storage-provisioner" [0177b9f4-828e-4903-acc7-d50fee28986c] Running
	I1205 20:04:47.584258   72782 system_pods.go:74] duration metric: took 180.664105ms to wait for pod list to return data ...
	I1205 20:04:47.584277   72782 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:04:47.777666   72782 request.go:629] Waited for 193.323543ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1205 20:04:47.777748   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1205 20:04:47.777759   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:47.777768   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:47.777775   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:47.780201   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:47.780222   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:47.780231   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:47.780238   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:47.780261   72782 round_trippers.go:580]     Content-Length: 261
	I1205 20:04:47.780274   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:47 GMT
	I1205 20:04:47.780281   72782 round_trippers.go:580]     Audit-Id: 50b280a4-3edd-4b42-851b-87b9c374afc3
	I1205 20:04:47.780287   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:47.780296   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:47.780318   72782 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"460"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"1e1dd7a8-7639-4082-ae0a-44be6d1fca87","resourceVersion":"355","creationTimestamp":"2023-12-05T20:04:13Z"}}]}
	I1205 20:04:47.780517   72782 default_sa.go:45] found service account: "default"
	I1205 20:04:47.780534   72782 default_sa.go:55] duration metric: took 196.250478ms for default service account to be created ...
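
default_sa.go polls because a namespace is not usable for pod creation until its "default" ServiceAccount has been created by the controller manager. A minimal polling sketch (waitForDefaultSA is a hypothetical name; the interval and timeout are illustrative):

	package sacheck

	import (
		"context"
		"time"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	func waitForDefaultSA(ctx context.Context, cs kubernetes.Interface) error {
		return wait.PollImmediate(200*time.Millisecond, 30*time.Second, func() (bool, error) {
			_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return false, nil // not created yet; keep polling
			}
			return err == nil, err
		})
	}
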
	I1205 20:04:47.780543   72782 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:04:47.977858   72782 request.go:629] Waited for 197.256601ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1205 20:04:47.977951   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1205 20:04:47.977973   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:47.977983   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:47.977995   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:47.981704   72782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:04:47.981771   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:47.981787   72782 round_trippers.go:580]     Audit-Id: 15a51e7f-8c67-4129-a1d2-13f3140ab9e1
	I1205 20:04:47.981794   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:47.981801   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:47.981807   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:47.981814   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:47.981820   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:47 GMT
	I1205 20:04:47.982298   72782 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"460"},"items":[{"metadata":{"name":"coredns-5dd5756b68-jg6xb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"68a13ae5-1cba-4475-b33a-8090d3001eae","resourceVersion":"456","creationTimestamp":"2023-12-05T20:04:13Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9044bb53-e854-441b-a046-ca23be2eacc5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9044bb53-e854-441b-a046-ca23be2eacc5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1205 20:04:47.984733   72782 system_pods.go:86] 8 kube-system pods found
	I1205 20:04:47.984763   72782 system_pods.go:89] "coredns-5dd5756b68-jg6xb" [68a13ae5-1cba-4475-b33a-8090d3001eae] Running
	I1205 20:04:47.984771   72782 system_pods.go:89] "etcd-multinode-930892" [610946f2-2a5c-4e9c-8bee-127cca42502c] Running
	I1205 20:04:47.984777   72782 system_pods.go:89] "kindnet-xtm24" [8c6bc758-aa3f-4204-98bb-68c004cdc2a8] Running
	I1205 20:04:47.984783   72782 system_pods.go:89] "kube-apiserver-multinode-930892" [ff4b2f9f-04b3-4c77-abdd-ed293fe3336d] Running
	I1205 20:04:47.984793   72782 system_pods.go:89] "kube-controller-manager-multinode-930892" [bf7a9066-c8ab-4c6e-b0cd-970b69612e10] Running
	I1205 20:04:47.984800   72782 system_pods.go:89] "kube-proxy-skbnx" [18565024-772b-429b-8d9b-77a81590210e] Running
	I1205 20:04:47.984807   72782 system_pods.go:89] "kube-scheduler-multinode-930892" [9e837e17-e45a-4631-92ba-602746f09a15] Running
	I1205 20:04:47.984815   72782 system_pods.go:89] "storage-provisioner" [0177b9f4-828e-4903-acc7-d50fee28986c] Running
	I1205 20:04:47.984821   72782 system_pods.go:126] duration metric: took 204.273353ms to wait for k8s-apps to be running ...
	I1205 20:04:47.984829   72782 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:04:47.984886   72782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:04:47.998102   72782 system_svc.go:56] duration metric: took 13.26293ms WaitForService to wait for kubelet.
	I1205 20:04:47.998170   72782 kubeadm.go:581] duration metric: took 33.805723365s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
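
The kubelet check above relies purely on systemctl's exit code: is-active --quiet prints nothing and exits 0 only when the unit is active. A local sketch of the same probe (minikube runs it over SSH via ssh_runner):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Run returns a non-nil error whenever the command exits non-zero,
		// so a nil error here means the kubelet unit is active.
		if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
			log.Fatalf("kubelet service is not active: %v", err)
		}
		log.Println("kubelet is active")
	}
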
	I1205 20:04:47.998197   72782 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:04:48.177592   72782 request.go:629] Waited for 179.312536ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1205 20:04:48.177668   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1205 20:04:48.177682   72782 round_trippers.go:469] Request Headers:
	I1205 20:04:48.177691   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:48.177700   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:04:48.180230   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:48.180254   72782 round_trippers.go:577] Response Headers:
	I1205 20:04:48.180263   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:48.180270   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:04:48.180296   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:04:48.180307   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:48 GMT
	I1205 20:04:48.180314   72782 round_trippers.go:580]     Audit-Id: bde0e962-c07b-4aa9-a4e8-f3b806ba74f2
	I1205 20:04:48.180323   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:48.180433   72782 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"461"},"items":[{"metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"437","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6082 chars]
	I1205 20:04:48.180871   72782 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1205 20:04:48.180900   72782 node_conditions.go:123] node cpu capacity is 2
	I1205 20:04:48.180912   72782 node_conditions.go:105] duration metric: took 182.70999ms to run NodePressure ...
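
The NodePressure step reads the capacities straight off the Node object in the NodeList above: cpu "2" and ephemeral-storage "203034800Ki". A minimal sketch of pulling those fields with client-go (nodeCapacity is a hypothetical name):

	package nodecheck

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func nodeCapacity(ctx context.Context, cs kubernetes.Interface, name string) error {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		cpu := node.Status.Capacity[corev1.ResourceCPU]              // "2" in the log above
		eph := node.Status.Capacity[corev1.ResourceEphemeralStorage] // "203034800Ki" above
		fmt.Printf("cpu=%s ephemeral-storage=%s\n", cpu.String(), eph.String())
		return nil
	}
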
	I1205 20:04:48.180930   72782 start.go:228] waiting for startup goroutines ...
	I1205 20:04:48.180939   72782 start.go:233] waiting for cluster config update ...
	I1205 20:04:48.180949   72782 start.go:242] writing updated cluster config ...
	I1205 20:04:48.184269   72782 out.go:177] 
	I1205 20:04:48.186181   72782 config.go:182] Loaded profile config "multinode-930892": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:04:48.186269   72782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/config.json ...
	I1205 20:04:48.188406   72782 out.go:177] * Starting worker node multinode-930892-m02 in cluster multinode-930892
	I1205 20:04:48.190325   72782 cache.go:121] Beginning downloading kic base image for docker with crio
	I1205 20:04:48.192445   72782 out.go:177] * Pulling base image ...
	I1205 20:04:48.194224   72782 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 20:04:48.194243   72782 cache.go:56] Caching tarball of preloaded images
	I1205 20:04:48.194293   72782 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local docker daemon
	I1205 20:04:48.194342   72782 preload.go:174] Found /home/jenkins/minikube-integration/17731-2478/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1205 20:04:48.194359   72782 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1205 20:04:48.194449   72782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/config.json ...
	I1205 20:04:48.212453   72782 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local docker daemon, skipping pull
	I1205 20:04:48.212478   72782 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f exists in daemon, skipping load
	I1205 20:04:48.212500   72782 cache.go:194] Successfully downloaded all kic artifacts
	I1205 20:04:48.212528   72782 start.go:365] acquiring machines lock for multinode-930892-m02: {Name:mk617624863f990b36ee103370b9d33b76872a82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:04:48.212643   72782 start.go:369] acquired machines lock for "multinode-930892-m02" in 89.388µs
	I1205 20:04:48.212673   72782 start.go:93] Provisioning new machine with config: &{Name:multinode-930892 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-930892 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1205 20:04:48.212753   72782 start.go:125] createHost starting for "m02" (driver="docker")
	I1205 20:04:48.215176   72782 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1205 20:04:48.215275   72782 start.go:159] libmachine.API.Create for "multinode-930892" (driver="docker")
	I1205 20:04:48.215297   72782 client.go:168] LocalClient.Create starting
	I1205 20:04:48.215358   72782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem
	I1205 20:04:48.215393   72782 main.go:141] libmachine: Decoding PEM data...
	I1205 20:04:48.215413   72782 main.go:141] libmachine: Parsing certificate...
	I1205 20:04:48.215467   72782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17731-2478/.minikube/certs/cert.pem
	I1205 20:04:48.215489   72782 main.go:141] libmachine: Decoding PEM data...
	I1205 20:04:48.215499   72782 main.go:141] libmachine: Parsing certificate...
	I1205 20:04:48.215723   72782 cli_runner.go:164] Run: docker network inspect multinode-930892 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 20:04:48.233805   72782 network_create.go:77] Found existing network {name:multinode-930892 subnet:0x40007a9a10 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I1205 20:04:48.233847   72782 kic.go:121] calculated static IP "192.168.58.3" for the "multinode-930892-m02" container
	I1205 20:04:48.233922   72782 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 20:04:48.252055   72782 cli_runner.go:164] Run: docker volume create multinode-930892-m02 --label name.minikube.sigs.k8s.io=multinode-930892-m02 --label created_by.minikube.sigs.k8s.io=true
	I1205 20:04:48.269528   72782 oci.go:103] Successfully created a docker volume multinode-930892-m02
	I1205 20:04:48.269612   72782 cli_runner.go:164] Run: docker run --rm --name multinode-930892-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-930892-m02 --entrypoint /usr/bin/test -v multinode-930892-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -d /var/lib
	I1205 20:04:48.828448   72782 oci.go:107] Successfully prepared a docker volume multinode-930892-m02
	I1205 20:04:48.828484   72782 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 20:04:48.828504   72782 kic.go:194] Starting extracting preloaded images to volume ...
	I1205 20:04:48.828583   72782 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17731-2478/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-930892-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -I lz4 -xf /preloaded.tar -C /extractDir
	I1205 20:04:53.171211   72782 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17731-2478/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-930892-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f -I lz4 -xf /preloaded.tar -C /extractDir: (4.342583837s)
	I1205 20:04:53.171240   72782 kic.go:203] duration metric: took 4.342734 seconds to extract preloaded images to volume
	W1205 20:04:53.171362   72782 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1205 20:04:53.171474   72782 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 20:04:53.239601   72782 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-930892-m02 --name multinode-930892-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-930892-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-930892-m02 --network multinode-930892 --ip 192.168.58.3 --volume multinode-930892-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f
	I1205 20:04:53.582044   72782 cli_runner.go:164] Run: docker container inspect multinode-930892-m02 --format={{.State.Running}}
	I1205 20:04:53.621858   72782 cli_runner.go:164] Run: docker container inspect multinode-930892-m02 --format={{.State.Status}}
	I1205 20:04:53.650227   72782 cli_runner.go:164] Run: docker exec multinode-930892-m02 stat /var/lib/dpkg/alternatives/iptables
	I1205 20:04:53.712937   72782 oci.go:144] the created container "multinode-930892-m02" has a running status.
	I1205 20:04:53.712962   72782 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17731-2478/.minikube/machines/multinode-930892-m02/id_rsa...
	I1205 20:04:54.183440   72782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/machines/multinode-930892-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1205 20:04:54.183528   72782 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17731-2478/.minikube/machines/multinode-930892-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 20:04:54.217052   72782 cli_runner.go:164] Run: docker container inspect multinode-930892-m02 --format={{.State.Status}}
	I1205 20:04:54.239203   72782 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 20:04:54.239225   72782 kic_runner.go:114] Args: [docker exec --privileged multinode-930892-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 20:04:54.333807   72782 cli_runner.go:164] Run: docker container inspect multinode-930892-m02 --format={{.State.Status}}
	I1205 20:04:54.371891   72782 machine.go:88] provisioning docker machine ...
	I1205 20:04:54.371921   72782 ubuntu.go:169] provisioning hostname "multinode-930892-m02"
	I1205 20:04:54.371989   72782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-930892-m02
	I1205 20:04:54.406243   72782 main.go:141] libmachine: Using SSH client type: native
	I1205 20:04:54.408671   72782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I1205 20:04:54.408692   72782 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-930892-m02 && echo "multinode-930892-m02" | sudo tee /etc/hostname
	I1205 20:04:54.605041   72782 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-930892-m02
	
	I1205 20:04:54.605121   72782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-930892-m02
	I1205 20:04:54.632809   72782 main.go:141] libmachine: Using SSH client type: native
	I1205 20:04:54.633203   72782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I1205 20:04:54.633221   72782 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-930892-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-930892-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-930892-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:04:54.798852   72782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:04:54.798886   72782 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17731-2478/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-2478/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-2478/.minikube}
	I1205 20:04:54.798902   72782 ubuntu.go:177] setting up certificates
	I1205 20:04:54.798910   72782 provision.go:83] configureAuth start
	I1205 20:04:54.798981   72782 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-930892-m02
	I1205 20:04:54.816302   72782 provision.go:138] copyHostCerts
	I1205 20:04:54.816344   72782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17731-2478/.minikube/ca.pem
	I1205 20:04:54.816377   72782 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-2478/.minikube/ca.pem, removing ...
	I1205 20:04:54.816388   72782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-2478/.minikube/ca.pem
	I1205 20:04:54.816464   72782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-2478/.minikube/ca.pem (1078 bytes)
	I1205 20:04:54.816542   72782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17731-2478/.minikube/cert.pem
	I1205 20:04:54.816566   72782 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-2478/.minikube/cert.pem, removing ...
	I1205 20:04:54.816575   72782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-2478/.minikube/cert.pem
	I1205 20:04:54.816604   72782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-2478/.minikube/cert.pem (1123 bytes)
	I1205 20:04:54.816648   72782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17731-2478/.minikube/key.pem
	I1205 20:04:54.816671   72782 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-2478/.minikube/key.pem, removing ...
	I1205 20:04:54.816678   72782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-2478/.minikube/key.pem
	I1205 20:04:54.816703   72782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-2478/.minikube/key.pem (1679 bytes)
	I1205 20:04:54.816752   72782 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-2478/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca-key.pem org=jenkins.multinode-930892-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-930892-m02]
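
provision.go issues a server certificate whose SANs cover the static IP, loopback, and the hostnames listed in the san=[...] set above. A minimal crypto/x509 sketch of building such a certificate; it is self-signed here for brevity, whereas the real flow signs with ca.pem/ca-key.pem, and the 26280h lifetime mirrors the CertExpiration value in the cluster config above:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-930892-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // mirrors CertExpiration:26280h0m0s
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// the SANs from the san=[...] list in the log line above
			DNSNames:    []string{"localhost", "minikube", "multinode-930892-m02"},
			IPAddresses: []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
		}
		// Self-signed: template doubles as the parent certificate.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
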
	I1205 20:04:55.279164   72782 provision.go:172] copyRemoteCerts
	I1205 20:04:55.279272   72782 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:04:55.279330   72782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-930892-m02
	I1205 20:04:55.296437   72782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/multinode-930892-m02/id_rsa Username:docker}
	I1205 20:04:55.402312   72782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 20:04:55.402369   72782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:04:55.430468   72782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 20:04:55.430530   72782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1205 20:04:55.457479   72782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 20:04:55.457537   72782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 20:04:55.484235   72782 provision.go:86] duration metric: configureAuth took 685.307865ms
	I1205 20:04:55.484300   72782 ubuntu.go:193] setting minikube options for container-runtime
	I1205 20:04:55.484514   72782 config.go:182] Loaded profile config "multinode-930892": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:04:55.484621   72782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-930892-m02
	I1205 20:04:55.502827   72782 main.go:141] libmachine: Using SSH client type: native
	I1205 20:04:55.503238   72782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I1205 20:04:55.503258   72782 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:04:55.763858   72782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:04:55.763882   72782 machine.go:91] provisioned docker machine in 1.391972092s
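
The CRIO_MINIKUBE_OPTIONS file above is written with the "printf ... | sudo tee" idiom, which lets an unprivileged SSH session create a root-owned file without a root shell redirect. A hypothetical Go equivalent of that pattern, run locally and assuming passwordless sudo:

    package main

    import (
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        content := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
        // tee runs under sudo and writes the file; the content arrives on
        // stdin, so no root-owned shell redirection is needed.
        cmd := exec.Command("sudo", "tee", "/etc/sysconfig/crio.minikube")
        cmd.Stdin = strings.NewReader(content)
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
    }
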
	I1205 20:04:55.763895   72782 client.go:171] LocalClient.Create took 7.548590739s
	I1205 20:04:55.763907   72782 start.go:167] duration metric: libmachine.API.Create for "multinode-930892" took 7.548632126s
	I1205 20:04:55.763915   72782 start.go:300] post-start starting for "multinode-930892-m02" (driver="docker")
	I1205 20:04:55.763929   72782 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:04:55.763997   72782 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:04:55.764041   72782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-930892-m02
	I1205 20:04:55.782063   72782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/multinode-930892-m02/id_rsa Username:docker}
	I1205 20:04:55.887869   72782 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:04:55.891691   72782 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1205 20:04:55.891711   72782 command_runner.go:130] > NAME="Ubuntu"
	I1205 20:04:55.891718   72782 command_runner.go:130] > VERSION_ID="22.04"
	I1205 20:04:55.891726   72782 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1205 20:04:55.891732   72782 command_runner.go:130] > VERSION_CODENAME=jammy
	I1205 20:04:55.891737   72782 command_runner.go:130] > ID=ubuntu
	I1205 20:04:55.891741   72782 command_runner.go:130] > ID_LIKE=debian
	I1205 20:04:55.891747   72782 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1205 20:04:55.891772   72782 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1205 20:04:55.891782   72782 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1205 20:04:55.891793   72782 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1205 20:04:55.891802   72782 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1205 20:04:55.892109   72782 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 20:04:55.892144   72782 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1205 20:04:55.892163   72782 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1205 20:04:55.892174   72782 info.go:137] Remote host: Ubuntu 22.04.3 LTS
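
The PRETTY_NAME/VERSION lines above come from /etc/os-release, a simple KEY=VALUE file; the "Couldn't set key" warnings just mean the parser found keys it has no struct field for. A rough sketch of such a parser, map-based rather than struct-based so unknown keys are kept instead of warned about:

    package main

    import (
        "fmt"
        "log"
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/etc/os-release")
        if err != nil {
            log.Fatal(err)
        }
        info := map[string]string{}
        for _, line := range strings.Split(string(data), "\n") {
            key, val, ok := strings.Cut(line, "=")
            if !ok || strings.HasPrefix(line, "#") {
                continue
            }
            // Values may be quoted, e.g. PRETTY_NAME="Ubuntu 22.04.3 LTS".
            info[key] = strings.Trim(val, `"`)
        }
        fmt.Println("Remote host:", info["PRETTY_NAME"])
    }
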
	I1205 20:04:55.892184   72782 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-2478/.minikube/addons for local assets ...
	I1205 20:04:55.892245   72782 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-2478/.minikube/files for local assets ...
	I1205 20:04:55.892326   72782 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-2478/.minikube/files/etc/ssl/certs/77732.pem -> 77732.pem in /etc/ssl/certs
	I1205 20:04:55.892337   72782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/files/etc/ssl/certs/77732.pem -> /etc/ssl/certs/77732.pem
	I1205 20:04:55.892432   72782 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:04:55.902462   72782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/files/etc/ssl/certs/77732.pem --> /etc/ssl/certs/77732.pem (1708 bytes)
	I1205 20:04:55.929321   72782 start.go:303] post-start completed in 165.388324ms
	I1205 20:04:55.929690   72782 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-930892-m02
	I1205 20:04:55.946729   72782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/config.json ...
	I1205 20:04:55.947005   72782 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 20:04:55.947045   72782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-930892-m02
	I1205 20:04:55.965468   72782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/multinode-930892-m02/id_rsa Username:docker}
	I1205 20:04:56.065319   72782 command_runner.go:130] > 14%
	I1205 20:04:56.065911   72782 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 20:04:56.071306   72782 command_runner.go:130] > 167G
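
Before createHost completes, the provisioner samples disk state twice: percent used (df -h, field $5) and gigabytes free (df -BG, field $4). A thin Go wrapper around the same shell pipelines, for illustration only:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    // run executes a shell pipeline and returns its trimmed stdout.
    func run(pipeline string) string {
        out, err := exec.Command("sh", "-c", pipeline).Output()
        if err != nil {
            log.Fatal(err)
        }
        return strings.TrimSpace(string(out))
    }

    func main() {
        fmt.Println("percent used:", run("df -h /var | awk 'NR==2{print $5}'"))
        fmt.Println("gigabytes free:", run("df -BG /var | awk 'NR==2{print $4}'"))
    }
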
	I1205 20:04:56.071654   72782 start.go:128] duration metric: createHost completed in 7.858888299s
	I1205 20:04:56.071669   72782 start.go:83] releasing machines lock for "multinode-930892-m02", held for 7.859012018s
	I1205 20:04:56.071745   72782 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-930892-m02
	I1205 20:04:56.092475   72782 out.go:177] * Found network options:
	I1205 20:04:56.094217   72782 out.go:177]   - NO_PROXY=192.168.58.2
	W1205 20:04:56.095953   72782 proxy.go:119] fail to check proxy env: Error ip not in block
	W1205 20:04:56.095989   72782 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 20:04:56.096056   72782 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:04:56.096100   72782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-930892-m02
	I1205 20:04:56.096344   72782 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:04:56.096407   72782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-930892-m02
	I1205 20:04:56.113839   72782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/multinode-930892-m02/id_rsa Username:docker}
	I1205 20:04:56.117824   72782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/multinode-930892-m02/id_rsa Username:docker}
	I1205 20:04:56.361701   72782 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1205 20:04:56.394871   72782 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 20:04:56.399796   72782 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1205 20:04:56.399821   72782 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1205 20:04:56.399833   72782 command_runner.go:130] > Device: b3h/179d	Inode: 1088822     Links: 1
	I1205 20:04:56.399841   72782 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 20:04:56.399848   72782 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1205 20:04:56.399857   72782 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1205 20:04:56.399866   72782 command_runner.go:130] > Change: 2023-12-05 19:35:52.969728843 +0000
	I1205 20:04:56.399872   72782 command_runner.go:130] >  Birth: 2023-12-05 19:35:52.969728843 +0000
	I1205 20:04:56.400330   72782 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:04:56.423072   72782 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1205 20:04:56.423194   72782 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:04:56.458752   72782 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1205 20:04:56.458787   72782 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
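
Disabling the default bridge/podman CNI configs above is just a rename: anything matching the patterns gets a .mk_disabled suffix so CRI-O no longer loads it, leaving minikube free to install its own CNI. A simplified Go version of that find/mv step, assuming the same /etc/cni/net.d layout:

    package main

    import (
        "fmt"
        "log"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        // Renaming under /etc/cni/net.d requires root.
        for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
            matches, err := filepath.Glob(pattern)
            if err != nil {
                log.Fatal(err)
            }
            for _, f := range matches {
                if strings.HasSuffix(f, ".mk_disabled") {
                    continue // already disabled
                }
                if err := os.Rename(f, f+".mk_disabled"); err != nil {
                    log.Fatal(err)
                }
                fmt.Println("disabled", f)
            }
        }
    }
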
	I1205 20:04:56.458795   72782 start.go:475] detecting cgroup driver to use...
	I1205 20:04:56.458834   72782 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1205 20:04:56.458890   72782 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:04:56.477617   72782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:04:56.490244   72782 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:04:56.490307   72782 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:04:56.505302   72782 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:04:56.521366   72782 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:04:56.610712   72782 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:04:56.710902   72782 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1205 20:04:56.710933   72782 docker.go:219] disabling docker service ...
	I1205 20:04:56.711001   72782 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:04:56.731987   72782 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:04:56.745771   72782 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:04:56.762190   72782 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1205 20:04:56.842734   72782 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:04:56.943049   72782 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1205 20:04:56.943128   72782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
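
Because the cluster runs CRI-O, the competing runtimes are shut down in a fixed stop, disable, mask sequence; masking symlinks the unit to /dev/null (visible in the "Created symlink ..." lines) so nothing can start it again. A sketch of the same sequence driven from Go, assuming passwordless sudo:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        for _, unit := range []string{"cri-docker", "docker"} {
            // Mirrors the log: stop socket and service, disable the socket,
            // then mask the service. Stops may fail if a unit is absent.
            exec.Command("sudo", "systemctl", "stop", "-f", unit+".socket").Run()
            exec.Command("sudo", "systemctl", "stop", "-f", unit+".service").Run()
            exec.Command("sudo", "systemctl", "disable", unit+".socket").Run()
            if err := exec.Command("sudo", "systemctl", "mask", unit+".service").Run(); err != nil {
                log.Fatalf("mask %s.service: %v", unit, err)
            }
        }
    }
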
	I1205 20:04:56.956126   72782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:04:56.973400   72782 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
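
crictl resolves its runtime through /etc/crictl.yaml; the single line written above points it at CRI-O's socket rather than a dockershim or containerd default. Writing the same file directly from Go, as a sketch that must run as root to touch /etc:

    package main

    import (
        "log"
        "os"
    )

    func main() {
        const conf = "runtime-endpoint: unix:///var/run/crio/crio.sock\n"
        // 0644 so non-root crictl invocations can still read the config.
        if err := os.WriteFile("/etc/crictl.yaml", []byte(conf), 0644); err != nil {
            log.Fatal(err) // typically needs root
        }
    }
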
	I1205 20:04:56.974566   72782 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1205 20:04:56.974655   72782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:04:56.986891   72782 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:04:56.986976   72782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:04:56.998217   72782 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:04:57.009826   72782 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
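
The three sed edits above pin the pause image to registry.k8s.io/pause:3.9 and force the cgroupfs cgroup manager, with conmon placed in the "pod" cgroup, inside /etc/crio/crio.conf.d/02-crio.conf. A Go approximation of those line-oriented substitutions using regexp, assuming the same drop-in file:

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        conf, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        // (?m) makes ^ and $ match per line, like sed's default addressing.
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
        // Delete any existing conmon_cgroup lines, then re-add one right
        // after cgroup_manager, matching the sed d + a commands in the log.
        conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAll(conf, nil)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(conf, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
        if err := os.WriteFile(path, conf, 0644); err != nil {
            log.Fatal(err)
        }
    }
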
	I1205 20:04:57.021561   72782 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:04:57.033159   72782 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:04:57.042643   72782 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1205 20:04:57.043739   72782 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:04:57.053956   72782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:04:57.150330   72782 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:04:57.263199   72782 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:04:57.263297   72782 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:04:57.267979   72782 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1205 20:04:57.268002   72782 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1205 20:04:57.268010   72782 command_runner.go:130] > Device: bch/188d	Inode: 190         Links: 1
	I1205 20:04:57.268044   72782 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 20:04:57.268056   72782 command_runner.go:130] > Access: 2023-12-05 20:04:57.249008766 +0000
	I1205 20:04:57.268064   72782 command_runner.go:130] > Modify: 2023-12-05 20:04:57.249008766 +0000
	I1205 20:04:57.268073   72782 command_runner.go:130] > Change: 2023-12-05 20:04:57.249008766 +0000
	I1205 20:04:57.268078   72782 command_runner.go:130] >  Birth: -
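
"Will wait 60s for socket path" is a simple poll: stat the path until it exists and is a unix socket, or give up at the deadline; the stat output above shows the socket appearing almost immediately after the crio restart. A Go sketch of that wait loop:

    package main

    import (
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/crio/crio.sock"
        deadline := time.Now().Add(60 * time.Second)
        for {
            if fi, err := os.Stat(sock); err == nil && fi.Mode()&os.ModeSocket != 0 {
                fmt.Println("socket ready:", sock)
                return
            }
            if time.Now().After(deadline) {
                log.Fatalf("timed out waiting for %s", sock)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
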
	I1205 20:04:57.268676   72782 start.go:543] Will wait 60s for crictl version
	I1205 20:04:57.268761   72782 ssh_runner.go:195] Run: which crictl
	I1205 20:04:57.272575   72782 command_runner.go:130] > /usr/bin/crictl
	I1205 20:04:57.272981   72782 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:04:57.309531   72782 command_runner.go:130] > Version:  0.1.0
	I1205 20:04:57.309724   72782 command_runner.go:130] > RuntimeName:  cri-o
	I1205 20:04:57.309737   72782 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1205 20:04:57.309875   72782 command_runner.go:130] > RuntimeApiVersion:  v1
	I1205 20:04:57.312447   72782 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1205 20:04:57.312555   72782 ssh_runner.go:195] Run: crio --version
	I1205 20:04:57.354168   72782 command_runner.go:130] > crio version 1.24.6
	I1205 20:04:57.354190   72782 command_runner.go:130] > Version:          1.24.6
	I1205 20:04:57.354199   72782 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1205 20:04:57.354205   72782 command_runner.go:130] > GitTreeState:     clean
	I1205 20:04:57.354235   72782 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1205 20:04:57.354250   72782 command_runner.go:130] > GoVersion:        go1.18.2
	I1205 20:04:57.354256   72782 command_runner.go:130] > Compiler:         gc
	I1205 20:04:57.354267   72782 command_runner.go:130] > Platform:         linux/arm64
	I1205 20:04:57.354273   72782 command_runner.go:130] > Linkmode:         dynamic
	I1205 20:04:57.354283   72782 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1205 20:04:57.354307   72782 command_runner.go:130] > SeccompEnabled:   true
	I1205 20:04:57.354314   72782 command_runner.go:130] > AppArmorEnabled:  false
	I1205 20:04:57.356145   72782 ssh_runner.go:195] Run: crio --version
	I1205 20:04:57.396580   72782 command_runner.go:130] > crio version 1.24.6
	I1205 20:04:57.396600   72782 command_runner.go:130] > Version:          1.24.6
	I1205 20:04:57.396609   72782 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1205 20:04:57.396644   72782 command_runner.go:130] > GitTreeState:     clean
	I1205 20:04:57.396658   72782 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1205 20:04:57.396664   72782 command_runner.go:130] > GoVersion:        go1.18.2
	I1205 20:04:57.396673   72782 command_runner.go:130] > Compiler:         gc
	I1205 20:04:57.396679   72782 command_runner.go:130] > Platform:         linux/arm64
	I1205 20:04:57.396689   72782 command_runner.go:130] > Linkmode:         dynamic
	I1205 20:04:57.396713   72782 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1205 20:04:57.396725   72782 command_runner.go:130] > SeccompEnabled:   true
	I1205 20:04:57.396739   72782 command_runner.go:130] > AppArmorEnabled:  false
	I1205 20:04:57.400360   72782 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1205 20:04:57.402111   72782 out.go:177]   - env NO_PROXY=192.168.58.2
	I1205 20:04:57.403831   72782 cli_runner.go:164] Run: docker network inspect multinode-930892 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 20:04:57.423183   72782 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1205 20:04:57.427532   72782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
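
The /etc/hosts rewrite above is made idempotent by filtering out any existing host.minikube.internal line before appending the fresh gateway mapping, staging through a temp file so the sudo cp replaces the file in one step. Roughly the same logic in Go, running as root, with the address taken from the log:

    package main

    import (
        "log"
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            log.Fatal(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any stale mapping, exactly like the grep -v in the log.
            if strings.HasSuffix(line, "\thost.minikube.internal") {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, "192.168.58.1\thost.minikube.internal")
        out := strings.Join(kept, "\n") + "\n"
        if err := os.WriteFile("/etc/hosts", []byte(out), 0644); err != nil {
            log.Fatal(err)
        }
    }
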
	I1205 20:04:57.440160   72782 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892 for IP: 192.168.58.3
	I1205 20:04:57.440192   72782 certs.go:190] acquiring lock for shared ca certs: {Name:mk8ef93a51958e82275f202c3866b092b6aa4ced Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:04:57.440326   72782 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17731-2478/.minikube/ca.key
	I1205 20:04:57.440374   72782 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17731-2478/.minikube/proxy-client-ca.key
	I1205 20:04:57.440388   72782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 20:04:57.440402   72782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 20:04:57.440414   72782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 20:04:57.440427   72782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 20:04:57.440478   72782 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/home/jenkins/minikube-integration/17731-2478/.minikube/certs/7773.pem (1338 bytes)
	W1205 20:04:57.440511   72782 certs.go:433] ignoring /home/jenkins/minikube-integration/17731-2478/.minikube/certs/home/jenkins/minikube-integration/17731-2478/.minikube/certs/7773_empty.pem, impossibly tiny 0 bytes
	I1205 20:04:57.440525   72782 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 20:04:57.440554   72782 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:04:57.440583   72782 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/home/jenkins/minikube-integration/17731-2478/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:04:57.440609   72782 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/home/jenkins/minikube-integration/17731-2478/.minikube/certs/key.pem (1679 bytes)
	I1205 20:04:57.440656   72782 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-2478/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17731-2478/.minikube/files/etc/ssl/certs/77732.pem (1708 bytes)
	I1205 20:04:57.440691   72782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/7773.pem -> /usr/share/ca-certificates/7773.pem
	I1205 20:04:57.440706   72782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/files/etc/ssl/certs/77732.pem -> /usr/share/ca-certificates/77732.pem
	I1205 20:04:57.440719   72782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-2478/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:04:57.441051   72782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:04:57.466961   72782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 20:04:57.493020   72782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:04:57.520285   72782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:04:57.546847   72782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/certs/7773.pem --> /usr/share/ca-certificates/7773.pem (1338 bytes)
	I1205 20:04:57.573538   72782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/files/etc/ssl/certs/77732.pem --> /usr/share/ca-certificates/77732.pem (1708 bytes)
	I1205 20:04:57.599730   72782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:04:57.625647   72782 ssh_runner.go:195] Run: openssl version
	I1205 20:04:57.632449   72782 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1205 20:04:57.632522   72782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77732.pem && ln -fs /usr/share/ca-certificates/77732.pem /etc/ssl/certs/77732.pem"
	I1205 20:04:57.643614   72782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77732.pem
	I1205 20:04:57.647787   72782 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/77732.pem
	I1205 20:04:57.647999   72782 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/77732.pem
	I1205 20:04:57.648057   72782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77732.pem
	I1205 20:04:57.656224   72782 command_runner.go:130] > 3ec20f2e
	I1205 20:04:57.656328   72782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/77732.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:04:57.667592   72782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:04:57.678829   72782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:04:57.683942   72782 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  5 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:04:57.684230   72782 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:04:57.684312   72782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:04:57.692202   72782 command_runner.go:130] > b5213941
	I1205 20:04:57.692616   72782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:04:57.703686   72782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7773.pem && ln -fs /usr/share/ca-certificates/7773.pem /etc/ssl/certs/7773.pem"
	I1205 20:04:57.714213   72782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7773.pem
	I1205 20:04:57.718264   72782 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/7773.pem
	I1205 20:04:57.718321   72782 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/7773.pem
	I1205 20:04:57.718371   72782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7773.pem
	I1205 20:04:57.726481   72782 command_runner.go:130] > 51391683
	I1205 20:04:57.726582   72782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7773.pem /etc/ssl/certs/51391683.0"
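
Each CA file above is activated by symlinking it as <subject-hash>.0 under /etc/ssl/certs, where the hash comes from openssl x509 -hash, so OpenSSL's hashed-directory lookup can find it. A Go sketch that shells out for the hash and then places the link, with one of the paths from the log as an example:

    package main

    import (
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        pemPath := "/usr/share/ca-certificates/7773.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            log.Fatal(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. 51391683
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // ln -fs semantics: replace any stale link
        if err := os.Symlink(pemPath, link); err != nil {
            log.Fatal(err)
        }
    }
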
	I1205 20:04:57.737356   72782 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 20:04:57.741344   72782 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1205 20:04:57.741388   72782 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1205 20:04:57.741494   72782 ssh_runner.go:195] Run: crio config
	I1205 20:04:57.797014   72782 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1205 20:04:57.797043   72782 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1205 20:04:57.797053   72782 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1205 20:04:57.797057   72782 command_runner.go:130] > #
	I1205 20:04:57.797066   72782 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1205 20:04:57.797078   72782 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1205 20:04:57.797086   72782 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1205 20:04:57.797100   72782 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1205 20:04:57.797105   72782 command_runner.go:130] > # reload'.
	I1205 20:04:57.797115   72782 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1205 20:04:57.797123   72782 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1205 20:04:57.797137   72782 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1205 20:04:57.797145   72782 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1205 20:04:57.797152   72782 command_runner.go:130] > [crio]
	I1205 20:04:57.797160   72782 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1205 20:04:57.797168   72782 command_runner.go:130] > # containers images, in this directory.
	I1205 20:04:57.797929   72782 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1205 20:04:57.797948   72782 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1205 20:04:57.798611   72782 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1205 20:04:57.798628   72782 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1205 20:04:57.798636   72782 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1205 20:04:57.799341   72782 command_runner.go:130] > # storage_driver = "vfs"
	I1205 20:04:57.799357   72782 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1205 20:04:57.799371   72782 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1205 20:04:57.799700   72782 command_runner.go:130] > # storage_option = [
	I1205 20:04:57.800109   72782 command_runner.go:130] > # ]
	I1205 20:04:57.800126   72782 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1205 20:04:57.800140   72782 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1205 20:04:57.800793   72782 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1205 20:04:57.800808   72782 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1205 20:04:57.800817   72782 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1205 20:04:57.800826   72782 command_runner.go:130] > # always happen on a node reboot
	I1205 20:04:57.801519   72782 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1205 20:04:57.801547   72782 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1205 20:04:57.801558   72782 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1205 20:04:57.801567   72782 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1205 20:04:57.802253   72782 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1205 20:04:57.802271   72782 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1205 20:04:57.802282   72782 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1205 20:04:57.802942   72782 command_runner.go:130] > # internal_wipe = true
	I1205 20:04:57.802957   72782 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1205 20:04:57.802965   72782 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1205 20:04:57.802972   72782 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1205 20:04:57.803648   72782 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1205 20:04:57.803663   72782 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1205 20:04:57.803669   72782 command_runner.go:130] > [crio.api]
	I1205 20:04:57.803675   72782 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1205 20:04:57.804084   72782 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1205 20:04:57.804138   72782 command_runner.go:130] > # IP address on which the stream server will listen.
	I1205 20:04:57.804283   72782 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1205 20:04:57.804324   72782 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1205 20:04:57.804344   72782 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1205 20:04:57.804589   72782 command_runner.go:130] > # stream_port = "0"
	I1205 20:04:57.804633   72782 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1205 20:04:57.804654   72782 command_runner.go:130] > # stream_enable_tls = false
	I1205 20:04:57.804676   72782 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1205 20:04:57.804710   72782 command_runner.go:130] > # stream_idle_timeout = ""
	I1205 20:04:57.804737   72782 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1205 20:04:57.804759   72782 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1205 20:04:57.804790   72782 command_runner.go:130] > # minutes.
	I1205 20:04:57.804813   72782 command_runner.go:130] > # stream_tls_cert = ""
	I1205 20:04:57.804834   72782 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1205 20:04:57.804868   72782 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1205 20:04:57.804890   72782 command_runner.go:130] > # stream_tls_key = ""
	I1205 20:04:57.804910   72782 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1205 20:04:57.804944   72782 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1205 20:04:57.804967   72782 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1205 20:04:57.804985   72782 command_runner.go:130] > # stream_tls_ca = ""
	I1205 20:04:57.805020   72782 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1205 20:04:57.805041   72782 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1205 20:04:57.805063   72782 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1205 20:04:57.805081   72782 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1205 20:04:57.805124   72782 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1205 20:04:57.805144   72782 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1205 20:04:57.805178   72782 command_runner.go:130] > [crio.runtime]
	I1205 20:04:57.805202   72782 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1205 20:04:57.805222   72782 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1205 20:04:57.805238   72782 command_runner.go:130] > # "nofile=1024:2048"
	I1205 20:04:57.805270   72782 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1205 20:04:57.805292   72782 command_runner.go:130] > # default_ulimits = [
	I1205 20:04:57.805311   72782 command_runner.go:130] > # ]
	I1205 20:04:57.805343   72782 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1205 20:04:57.805364   72782 command_runner.go:130] > # no_pivot = false
	I1205 20:04:57.805384   72782 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1205 20:04:57.805405   72782 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1205 20:04:57.805434   72782 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1205 20:04:57.805459   72782 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1205 20:04:57.805478   72782 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1205 20:04:57.805514   72782 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 20:04:57.805535   72782 command_runner.go:130] > # conmon = ""
	I1205 20:04:57.805553   72782 command_runner.go:130] > # Cgroup setting for conmon
	I1205 20:04:57.805588   72782 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1205 20:04:57.805611   72782 command_runner.go:130] > conmon_cgroup = "pod"
	I1205 20:04:57.805634   72782 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1205 20:04:57.805666   72782 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1205 20:04:57.805692   72782 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 20:04:57.805710   72782 command_runner.go:130] > # conmon_env = [
	I1205 20:04:57.805726   72782 command_runner.go:130] > # ]
	I1205 20:04:57.805759   72782 command_runner.go:130] > # Additional environment variables to set for all the
	I1205 20:04:57.805786   72782 command_runner.go:130] > # containers. These are overridden if set in the
	I1205 20:04:57.805817   72782 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1205 20:04:57.805845   72782 command_runner.go:130] > # default_env = [
	I1205 20:04:57.805856   72782 command_runner.go:130] > # ]
	I1205 20:04:57.805864   72782 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1205 20:04:57.805869   72782 command_runner.go:130] > # selinux = false
	I1205 20:04:57.805877   72782 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1205 20:04:57.805885   72782 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1205 20:04:57.805895   72782 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1205 20:04:57.805903   72782 command_runner.go:130] > # seccomp_profile = ""
	I1205 20:04:57.805913   72782 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1205 20:04:57.805920   72782 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1205 20:04:57.805931   72782 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1205 20:04:57.805937   72782 command_runner.go:130] > # which might increase security.
	I1205 20:04:57.807837   72782 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1205 20:04:57.807853   72782 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1205 20:04:57.807861   72782 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1205 20:04:57.807869   72782 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1205 20:04:57.807877   72782 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1205 20:04:57.807883   72782 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:04:57.807891   72782 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1205 20:04:57.807901   72782 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1205 20:04:57.807907   72782 command_runner.go:130] > # the cgroup blockio controller.
	I1205 20:04:57.807915   72782 command_runner.go:130] > # blockio_config_file = ""
	I1205 20:04:57.807924   72782 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1205 20:04:57.807929   72782 command_runner.go:130] > # irqbalance daemon.
	I1205 20:04:57.807936   72782 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1205 20:04:57.807946   72782 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1205 20:04:57.807953   72782 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:04:57.808156   72782 command_runner.go:130] > # rdt_config_file = ""
	I1205 20:04:57.808168   72782 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1205 20:04:57.808174   72782 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1205 20:04:57.808182   72782 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1205 20:04:57.808187   72782 command_runner.go:130] > # separate_pull_cgroup = ""
	I1205 20:04:57.808195   72782 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1205 20:04:57.808205   72782 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1205 20:04:57.808210   72782 command_runner.go:130] > # will be added.
	I1205 20:04:57.808219   72782 command_runner.go:130] > # default_capabilities = [
	I1205 20:04:57.808225   72782 command_runner.go:130] > # 	"CHOWN",
	I1205 20:04:57.808239   72782 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1205 20:04:57.808244   72782 command_runner.go:130] > # 	"FSETID",
	I1205 20:04:57.808249   72782 command_runner.go:130] > # 	"FOWNER",
	I1205 20:04:57.808259   72782 command_runner.go:130] > # 	"SETGID",
	I1205 20:04:57.808264   72782 command_runner.go:130] > # 	"SETUID",
	I1205 20:04:57.808269   72782 command_runner.go:130] > # 	"SETPCAP",
	I1205 20:04:57.808273   72782 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1205 20:04:57.808282   72782 command_runner.go:130] > # 	"KILL",
	I1205 20:04:57.808289   72782 command_runner.go:130] > # ]
	I1205 20:04:57.808299   72782 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1205 20:04:57.808310   72782 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1205 20:04:57.808316   72782 command_runner.go:130] > # add_inheritable_capabilities = true
	I1205 20:04:57.808325   72782 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1205 20:04:57.808336   72782 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 20:04:57.808341   72782 command_runner.go:130] > # default_sysctls = [
	I1205 20:04:57.808347   72782 command_runner.go:130] > # ]
	I1205 20:04:57.808353   72782 command_runner.go:130] > # List of devices on the host that a
	I1205 20:04:57.808361   72782 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1205 20:04:57.808369   72782 command_runner.go:130] > # allowed_devices = [
	I1205 20:04:57.808374   72782 command_runner.go:130] > # 	"/dev/fuse",
	I1205 20:04:57.808381   72782 command_runner.go:130] > # ]
	I1205 20:04:57.808387   72782 command_runner.go:130] > # List of additional devices. specified as
	I1205 20:04:57.808404   72782 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1205 20:04:57.808415   72782 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1205 20:04:57.808425   72782 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 20:04:57.808431   72782 command_runner.go:130] > # additional_devices = [
	I1205 20:04:57.808437   72782 command_runner.go:130] > # ]
	I1205 20:04:57.808444   72782 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1205 20:04:57.808449   72782 command_runner.go:130] > # cdi_spec_dirs = [
	I1205 20:04:57.808454   72782 command_runner.go:130] > # 	"/etc/cdi",
	I1205 20:04:57.808462   72782 command_runner.go:130] > # 	"/var/run/cdi",
	I1205 20:04:57.808467   72782 command_runner.go:130] > # ]
	I1205 20:04:57.808476   72782 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1205 20:04:57.808494   72782 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1205 20:04:57.808499   72782 command_runner.go:130] > # Defaults to false.
	I1205 20:04:57.808506   72782 command_runner.go:130] > # device_ownership_from_security_context = false
	I1205 20:04:57.808516   72782 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1205 20:04:57.808524   72782 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1205 20:04:57.808532   72782 command_runner.go:130] > # hooks_dir = [
	I1205 20:04:57.808538   72782 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1205 20:04:57.808542   72782 command_runner.go:130] > # ]
	I1205 20:04:57.808549   72782 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1205 20:04:57.808562   72782 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1205 20:04:57.808570   72782 command_runner.go:130] > # its default mounts from the following two files:
	I1205 20:04:57.808574   72782 command_runner.go:130] > #
	I1205 20:04:57.808585   72782 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1205 20:04:57.808595   72782 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1205 20:04:57.808602   72782 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1205 20:04:57.808608   72782 command_runner.go:130] > #
	I1205 20:04:57.808616   72782 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1205 20:04:57.808626   72782 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1205 20:04:57.808634   72782 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1205 20:04:57.808643   72782 command_runner.go:130] > #      only add mounts it finds in this file.
	I1205 20:04:57.808647   72782 command_runner.go:130] > #
	I1205 20:04:57.808655   72782 command_runner.go:130] > # default_mounts_file = ""
	I1205 20:04:57.808662   72782 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1205 20:04:57.808672   72782 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1205 20:04:57.808677   72782 command_runner.go:130] > # pids_limit = 0
	I1205 20:04:57.808687   72782 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1205 20:04:57.808697   72782 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1205 20:04:57.808706   72782 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1205 20:04:57.808718   72782 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1205 20:04:57.808723   72782 command_runner.go:130] > # log_size_max = -1
	I1205 20:04:57.808737   72782 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1205 20:04:57.808744   72782 command_runner.go:130] > # log_to_journald = false
	I1205 20:04:57.808752   72782 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1205 20:04:57.808758   72782 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1205 20:04:57.808767   72782 command_runner.go:130] > # Path to directory for container attach sockets.
	I1205 20:04:57.808773   72782 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1205 20:04:57.808782   72782 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1205 20:04:57.808787   72782 command_runner.go:130] > # bind_mount_prefix = ""
	I1205 20:04:57.808794   72782 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1205 20:04:57.808801   72782 command_runner.go:130] > # read_only = false
	I1205 20:04:57.808809   72782 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1205 20:04:57.808819   72782 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1205 20:04:57.808824   72782 command_runner.go:130] > # live configuration reload.
	I1205 20:04:57.808829   72782 command_runner.go:130] > # log_level = "info"
	I1205 20:04:57.808837   72782 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1205 20:04:57.808847   72782 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:04:57.808853   72782 command_runner.go:130] > # log_filter = ""
	I1205 20:04:57.808862   72782 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1205 20:04:57.808872   72782 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1205 20:04:57.808877   72782 command_runner.go:130] > # separated by comma.
	I1205 20:04:57.808882   72782 command_runner.go:130] > # uid_mappings = ""
	I1205 20:04:57.808890   72782 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1205 20:04:57.808899   72782 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1205 20:04:57.808904   72782 command_runner.go:130] > # separated by comma.
	I1205 20:04:57.808912   72782 command_runner.go:130] > # gid_mappings = ""
	I1205 20:04:57.808919   72782 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1205 20:04:57.808926   72782 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 20:04:57.808936   72782 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 20:04:57.808942   72782 command_runner.go:130] > # minimum_mappable_uid = -1
	I1205 20:04:57.808952   72782 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1205 20:04:57.808959   72782 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 20:04:57.808967   72782 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 20:04:57.808974   72782 command_runner.go:130] > # minimum_mappable_gid = -1
	I1205 20:04:57.808984   72782 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1205 20:04:57.808992   72782 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1205 20:04:57.809001   72782 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1205 20:04:57.809007   72782 command_runner.go:130] > # ctr_stop_timeout = 30
	I1205 20:04:57.809014   72782 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1205 20:04:57.809024   72782 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1205 20:04:57.809030   72782 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1205 20:04:57.809038   72782 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1205 20:04:57.809043   72782 command_runner.go:130] > # drop_infra_ctr = true
	I1205 20:04:57.809051   72782 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1205 20:04:57.809060   72782 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1205 20:04:57.809069   72782 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1205 20:04:57.809077   72782 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1205 20:04:57.809084   72782 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1205 20:04:57.809091   72782 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1205 20:04:57.809100   72782 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1205 20:04:57.809109   72782 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1205 20:04:57.809117   72782 command_runner.go:130] > # pinns_path = ""
	I1205 20:04:57.809125   72782 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1205 20:04:57.809133   72782 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1205 20:04:57.809141   72782 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1205 20:04:57.809155   72782 command_runner.go:130] > # default_runtime = "runc"
	I1205 20:04:57.809162   72782 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1205 20:04:57.809171   72782 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1205 20:04:57.809185   72782 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1205 20:04:57.809194   72782 command_runner.go:130] > # creation as a file is not desired either.
	I1205 20:04:57.809205   72782 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1205 20:04:57.809212   72782 command_runner.go:130] > # the hostname is being managed dynamically.
	I1205 20:04:57.809217   72782 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1205 20:04:57.809224   72782 command_runner.go:130] > # ]
	I1205 20:04:57.809231   72782 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1205 20:04:57.809242   72782 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1205 20:04:57.809250   72782 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1205 20:04:57.809258   72782 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1205 20:04:57.809264   72782 command_runner.go:130] > #
	I1205 20:04:57.809270   72782 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1205 20:04:57.809277   72782 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1205 20:04:57.809285   72782 command_runner.go:130] > #  runtime_type = "oci"
	I1205 20:04:57.809291   72782 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1205 20:04:57.809297   72782 command_runner.go:130] > #  privileged_without_host_devices = false
	I1205 20:04:57.809306   72782 command_runner.go:130] > #  allowed_annotations = []
	I1205 20:04:57.809311   72782 command_runner.go:130] > # Where:
	I1205 20:04:57.809326   72782 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1205 20:04:57.809334   72782 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1205 20:04:57.809342   72782 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1205 20:04:57.809353   72782 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1205 20:04:57.809358   72782 command_runner.go:130] > #   in $PATH.
	I1205 20:04:57.809368   72782 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1205 20:04:57.809375   72782 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1205 20:04:57.809382   72782 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1205 20:04:57.809392   72782 command_runner.go:130] > #   state.
	I1205 20:04:57.809399   72782 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1205 20:04:57.809412   72782 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1205 20:04:57.809420   72782 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1205 20:04:57.809430   72782 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1205 20:04:57.809438   72782 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1205 20:04:57.809449   72782 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1205 20:04:57.809455   72782 command_runner.go:130] > #   The currently recognized values are:
	I1205 20:04:57.809463   72782 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1205 20:04:57.809471   72782 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1205 20:04:57.809482   72782 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1205 20:04:57.809492   72782 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1205 20:04:57.809501   72782 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1205 20:04:57.809511   72782 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1205 20:04:57.809519   72782 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1205 20:04:57.809530   72782 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1205 20:04:57.809536   72782 command_runner.go:130] > #   should be moved to the container's cgroup
	I1205 20:04:57.809542   72782 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1205 20:04:57.809550   72782 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1205 20:04:57.809555   72782 command_runner.go:130] > runtime_type = "oci"
	I1205 20:04:57.809562   72782 command_runner.go:130] > runtime_root = "/run/runc"
	I1205 20:04:57.809570   72782 command_runner.go:130] > runtime_config_path = ""
	I1205 20:04:57.809577   72782 command_runner.go:130] > monitor_path = ""
	I1205 20:04:57.809584   72782 command_runner.go:130] > monitor_cgroup = ""
	I1205 20:04:57.809590   72782 command_runner.go:130] > monitor_exec_cgroup = ""
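To make the handler-to-runtime mapping above concrete, here is a small hypothetical Go sketch of the lookup it describes: match the CRI-provided handler name against the runtimes table, fall back to default_runtime when no handler is given, and assume "oci" when runtime_type is omitted. The struct and function are illustrative, not CRI-O internals.

	package main

	import "fmt"

	// runtimeEntry mirrors a [crio.runtime.runtimes.*] table entry; the
	// struct and lookup are an illustrative sketch only.
	type runtimeEntry struct {
		RuntimePath string
		RuntimeType string
		RuntimeRoot string
	}

	func pickRuntime(handler, defaultRuntime string, runtimes map[string]runtimeEntry) (runtimeEntry, error) {
		if handler == "" {
			handler = defaultRuntime // no handler from the CRI: use the default
		}
		rt, ok := runtimes[handler]
		if !ok {
			return runtimeEntry{}, fmt.Errorf("no runtime configured for handler %q", handler)
		}
		if rt.RuntimeType == "" {
			rt.RuntimeType = "oci" // omitted runtime_type is assumed to be "oci"
		}
		return rt, nil
	}

	func main() {
		runtimes := map[string]runtimeEntry{
			"runc": {RuntimePath: "/usr/lib/cri-o-runc/sbin/runc", RuntimeRoot: "/run/runc"},
		}
		fmt.Println(pickRuntime("", "runc", runtimes))
	}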
	I1205 20:04:57.809620   72782 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1205 20:04:57.809628   72782 command_runner.go:130] > # running containers
	I1205 20:04:57.809633   72782 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1205 20:04:57.809641   72782 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1205 20:04:57.809651   72782 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1205 20:04:57.809661   72782 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1205 20:04:57.809667   72782 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1205 20:04:57.809673   72782 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1205 20:04:57.809681   72782 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1205 20:04:57.809687   72782 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1205 20:04:57.809699   72782 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1205 20:04:57.809705   72782 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1205 20:04:57.809714   72782 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1205 20:04:57.809720   72782 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1205 20:04:57.809730   72782 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1205 20:04:57.809741   72782 command_runner.go:130] > # Each workload has a name, an activation_annotation, an annotation_prefix, and a set of resources it supports mutating.
	I1205 20:04:57.809754   72782 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1205 20:04:57.809761   72782 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1205 20:04:57.809774   72782 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1205 20:04:57.809790   72782 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1205 20:04:57.809798   72782 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1205 20:04:57.809807   72782 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1205 20:04:57.809812   72782 command_runner.go:130] > # Example:
	I1205 20:04:57.809820   72782 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1205 20:04:57.809826   72782 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1205 20:04:57.809835   72782 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1205 20:04:57.809841   72782 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1205 20:04:57.809846   72782 command_runner.go:130] > # cpuset = "0-1"
	I1205 20:04:57.809853   72782 command_runner.go:130] > # cpushares = 0
	I1205 20:04:57.809858   72782 command_runner.go:130] > # Where:
	I1205 20:04:57.809864   72782 command_runner.go:130] > # The workload name is workload-type.
	I1205 20:04:57.809875   72782 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1205 20:04:57.809882   72782 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1205 20:04:57.809889   72782 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1205 20:04:57.809901   72782 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1205 20:04:57.809910   72782 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1205 20:04:57.809915   72782 command_runner.go:130] > # 
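The annotation grammar above ($annotation_prefix.$resource/$ctrName) is dense, so the following hypothetical Go sketch shows how a per-container override could be matched once the activation annotation is present. Names and values are illustrative only.

	package main

	import "fmt"

	// resourceOverride sketches how a per-container workload annotation
	// of the form $annotation_prefix.$resource/$ctrName = "value" could
	// be looked up, following the format described above.
	func resourceOverride(annotations map[string]string, prefix, resource, ctrName string) (string, bool) {
		key := prefix + "." + resource + "/" + ctrName
		v, ok := annotations[key]
		return v, ok
	}

	func main() {
		ann := map[string]string{
			"io.crio/workload":                   "", // activation annotation (key only)
			"io.crio.workload-type.cpushares/c1": "512",
		}
		if _, active := ann["io.crio/workload"]; active {
			if v, ok := resourceOverride(ann, "io.crio.workload-type", "cpushares", "c1"); ok {
				fmt.Println("cpushares override for c1:", v)
			}
		}
	}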
	I1205 20:04:57.809923   72782 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1205 20:04:57.809930   72782 command_runner.go:130] > #
	I1205 20:04:57.809937   72782 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1205 20:04:57.809947   72782 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1205 20:04:57.809957   72782 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1205 20:04:57.809965   72782 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1205 20:04:57.809972   72782 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1205 20:04:57.809983   72782 command_runner.go:130] > [crio.image]
	I1205 20:04:57.809995   72782 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1205 20:04:57.810001   72782 command_runner.go:130] > # default_transport = "docker://"
	I1205 20:04:57.810009   72782 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1205 20:04:57.810020   72782 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1205 20:04:57.810026   72782 command_runner.go:130] > # global_auth_file = ""
	I1205 20:04:57.810034   72782 command_runner.go:130] > # The image used to instantiate infra containers.
	I1205 20:04:57.810041   72782 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:04:57.810048   72782 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1205 20:04:57.810056   72782 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1205 20:04:57.810063   72782 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1205 20:04:57.810072   72782 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:04:57.810077   72782 command_runner.go:130] > # pause_image_auth_file = ""
	I1205 20:04:57.810084   72782 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1205 20:04:57.810095   72782 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1205 20:04:57.810102   72782 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1205 20:04:57.810112   72782 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1205 20:04:57.810118   72782 command_runner.go:130] > # pause_command = "/pause"
	I1205 20:04:57.810125   72782 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1205 20:04:57.810133   72782 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1205 20:04:57.810143   72782 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1205 20:04:57.810150   72782 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1205 20:04:57.810157   72782 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1205 20:04:57.810166   72782 command_runner.go:130] > # signature_policy = ""
	I1205 20:04:57.810174   72782 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1205 20:04:57.810185   72782 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1205 20:04:57.810190   72782 command_runner.go:130] > # changing them here.
	I1205 20:04:57.810197   72782 command_runner.go:130] > # insecure_registries = [
	I1205 20:04:57.810202   72782 command_runner.go:130] > # ]
	I1205 20:04:57.810209   72782 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1205 20:04:57.810216   72782 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1205 20:04:57.810223   72782 command_runner.go:130] > # image_volumes = "mkdir"
	I1205 20:04:57.810231   72782 command_runner.go:130] > # Temporary directory to use for storing big files
	I1205 20:04:57.810238   72782 command_runner.go:130] > # big_files_temporary_dir = ""
	I1205 20:04:57.810246   72782 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1205 20:04:57.810251   72782 command_runner.go:130] > # CNI plugins.
	I1205 20:04:57.810258   72782 command_runner.go:130] > [crio.network]
	I1205 20:04:57.810266   72782 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1205 20:04:57.810278   72782 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1205 20:04:57.810283   72782 command_runner.go:130] > # cni_default_network = ""
	I1205 20:04:57.810291   72782 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1205 20:04:57.810297   72782 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1205 20:04:57.810304   72782 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1205 20:04:57.810312   72782 command_runner.go:130] > # plugin_dirs = [
	I1205 20:04:57.810317   72782 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1205 20:04:57.810322   72782 command_runner.go:130] > # ]
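The "pick up the first one found in network_dir" behavior is worth a concrete illustration. The sketch below scans a directory for .conf/.conflist files and returns the lexically first match; it is an approximation of the idea, not CRI-O's exact discovery code.

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"sort"
		"strings"
	)

	// firstCNIConfig approximates the default-network selection: keep
	// CNI config files from network_dir and return the lexically first.
	func firstCNIConfig(networkDir string) (string, error) {
		entries, err := os.ReadDir(networkDir)
		if err != nil {
			return "", err
		}
		var configs []string
		for _, e := range entries {
			name := e.Name()
			if strings.HasSuffix(name, ".conf") || strings.HasSuffix(name, ".conflist") {
				configs = append(configs, filepath.Join(networkDir, name))
			}
		}
		if len(configs) == 0 {
			return "", fmt.Errorf("no CNI config found in %s", networkDir)
		}
		sort.Strings(configs)
		return configs[0], nil
	}

	func main() {
		cfg, err := firstCNIConfig("/etc/cni/net.d/")
		fmt.Println(cfg, err)
	}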
	I1205 20:04:57.810329   72782 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1205 20:04:57.810336   72782 command_runner.go:130] > [crio.metrics]
	I1205 20:04:57.810342   72782 command_runner.go:130] > # Globally enable or disable metrics support.
	I1205 20:04:57.810350   72782 command_runner.go:130] > # enable_metrics = false
	I1205 20:04:57.810356   72782 command_runner.go:130] > # Specify enabled metrics collectors.
	I1205 20:04:57.810364   72782 command_runner.go:130] > # By default, all metrics are enabled.
	I1205 20:04:57.810372   72782 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1205 20:04:57.810379   72782 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1205 20:04:57.810387   72782 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1205 20:04:57.810393   72782 command_runner.go:130] > # metrics_collectors = [
	I1205 20:04:57.810399   72782 command_runner.go:130] > # 	"operations",
	I1205 20:04:57.810407   72782 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1205 20:04:57.810413   72782 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1205 20:04:57.810418   72782 command_runner.go:130] > # 	"operations_errors",
	I1205 20:04:57.810423   72782 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1205 20:04:57.810431   72782 command_runner.go:130] > # 	"image_pulls_by_name",
	I1205 20:04:57.810437   72782 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1205 20:04:57.810449   72782 command_runner.go:130] > # 	"image_pulls_failures",
	I1205 20:04:57.810454   72782 command_runner.go:130] > # 	"image_pulls_successes",
	I1205 20:04:57.810459   72782 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1205 20:04:57.810465   72782 command_runner.go:130] > # 	"image_layer_reuse",
	I1205 20:04:57.810470   72782 command_runner.go:130] > # 	"containers_oom_total",
	I1205 20:04:57.810475   72782 command_runner.go:130] > # 	"containers_oom",
	I1205 20:04:57.810485   72782 command_runner.go:130] > # 	"processes_defunct",
	I1205 20:04:57.810491   72782 command_runner.go:130] > # 	"operations_total",
	I1205 20:04:57.810497   72782 command_runner.go:130] > # 	"operations_latency_seconds",
	I1205 20:04:57.810506   72782 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1205 20:04:57.810511   72782 command_runner.go:130] > # 	"operations_errors_total",
	I1205 20:04:57.810522   72782 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1205 20:04:57.810527   72782 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1205 20:04:57.810533   72782 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1205 20:04:57.810539   72782 command_runner.go:130] > # 	"image_pulls_success_total",
	I1205 20:04:57.810544   72782 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1205 20:04:57.810551   72782 command_runner.go:130] > # 	"containers_oom_count_total",
	I1205 20:04:57.810559   72782 command_runner.go:130] > # ]
	I1205 20:04:57.810565   72782 command_runner.go:130] > # The port on which the metrics server will listen.
	I1205 20:04:57.810570   72782 command_runner.go:130] > # metrics_port = 9090
	I1205 20:04:57.810583   72782 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1205 20:04:57.810588   72782 command_runner.go:130] > # metrics_socket = ""
	I1205 20:04:57.810597   72782 command_runner.go:130] > # The certificate for the secure metrics server.
	I1205 20:04:57.810606   72782 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1205 20:04:57.810614   72782 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1205 20:04:57.810620   72782 command_runner.go:130] > # certificate on any modification event.
	I1205 20:04:57.810625   72782 command_runner.go:130] > # metrics_cert = ""
	I1205 20:04:57.810633   72782 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1205 20:04:57.810640   72782 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1205 20:04:57.810647   72782 command_runner.go:130] > # metrics_key = ""
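The metrics_cert reload behavior (reload on any modification event) can be approximated with a polling loop using only the standard library. CRI-O itself watches for filesystem events, so the sketch below, with hypothetical paths, is a simplified stand-in.

	package main

	import (
		"crypto/tls"
		"fmt"
		"os"
		"time"
	)

	// watchCert is a polling-based sketch of the reload behavior
	// described for metrics_cert: when the file's mtime changes, reload
	// the key pair and hand it to the caller.
	func watchCert(certPath, keyPath string, interval time.Duration, onReload func(tls.Certificate)) error {
		var lastMod time.Time
		for {
			info, err := os.Stat(certPath)
			if err != nil {
				return err
			}
			if mod := info.ModTime(); mod.After(lastMod) {
				cert, err := tls.LoadX509KeyPair(certPath, keyPath)
				if err != nil {
					return err
				}
				lastMod = mod
				onReload(cert)
			}
			time.Sleep(interval)
		}
	}

	func main() {
		err := watchCert("/tmp/metrics.crt", "/tmp/metrics.key", 2*time.Second, func(tls.Certificate) {
			fmt.Println("metrics certificate (re)loaded")
		})
		fmt.Println(err)
	}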
	I1205 20:04:57.810655   72782 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1205 20:04:57.810659   72782 command_runner.go:130] > [crio.tracing]
	I1205 20:04:57.810668   72782 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1205 20:04:57.810673   72782 command_runner.go:130] > # enable_tracing = false
	I1205 20:04:57.810683   72782 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1205 20:04:57.810691   72782 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1205 20:04:57.810698   72782 command_runner.go:130] > # Number of samples to collect per million spans.
	I1205 20:04:57.810704   72782 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1205 20:04:57.810712   72782 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1205 20:04:57.810717   72782 command_runner.go:130] > [crio.stats]
	I1205 20:04:57.810726   72782 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1205 20:04:57.810735   72782 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1205 20:04:57.810749   72782 command_runner.go:130] > # stats_collection_period = 0
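As a small illustration of the stats_collection_period semantics just described, the hypothetical sketch below runs a ticker-driven collector for a positive period and does nothing (on-demand mode) when the period is 0.

	package main

	import (
		"fmt"
		"time"
	)

	// statsLoop: with a positive period, collect on a ticker; with 0,
	// skip the background loop and let callers collect on demand.
	func statsLoop(period time.Duration, collect func()) {
		if period == 0 {
			return // on-demand mode: nothing to schedule
		}
		t := time.NewTicker(period)
		defer t.Stop()
		for range t.C {
			collect()
		}
	}

	func main() {
		go statsLoop(time.Second, func() { fmt.Println("collected pod/container stats") })
		time.Sleep(3 * time.Second)
	}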
	I1205 20:04:57.810769   72782 command_runner.go:130] ! time="2023-12-05 20:04:57.794662966Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1205 20:04:57.810785   72782 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1205 20:04:57.810837   72782 cni.go:84] Creating CNI manager for ""
	I1205 20:04:57.810850   72782 cni.go:136] 2 nodes found, recommending kindnet
	I1205 20:04:57.810861   72782 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 20:04:57.810882   72782 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-930892 NodeName:multinode-930892-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:04:57.811001   72782 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-930892-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:04:57.811058   72782 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-930892-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-930892 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
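	The kubeadm config and kubelet unit dumped above are rendered by minikube from Go text/template definitions filled with the kubeadm options struct. A toy fragment, with values copied from this log, shows the mechanism; minikube's real templates are larger and live in its bootstrapper package.

	package main

	import (
		"os"
		"text/template"
	)

	// A cut-down fragment of the InitConfiguration rendered above; the
	// template text and struct here are illustrative, not minikube's.
	const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: unix://{{.CRISocket}}
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.NodeIP}}
	`

	func main() {
		opts := struct {
			AdvertiseAddress string
			APIServerPort    int
			CRISocket        string
			NodeName         string
			NodeIP           string
		}{"192.168.58.3", 8443, "/var/run/crio/crio.sock", "multinode-930892-m02", "192.168.58.3"}
		tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
		if err := tmpl.Execute(os.Stdout, opts); err != nil {
			panic(err)
		}
	}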
	I1205 20:04:57.811121   72782 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1205 20:04:57.821045   72782 command_runner.go:130] > kubeadm
	I1205 20:04:57.821062   72782 command_runner.go:130] > kubectl
	I1205 20:04:57.821068   72782 command_runner.go:130] > kubelet
	I1205 20:04:57.821080   72782 binaries.go:44] Found k8s binaries, skipping transfer
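A minimal sketch of the binaries check that produced the three lines above: if kubeadm, kubectl, and kubelet already exist under the versioned directory, the transfer step is skipped. The helper name is illustrative.

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// haveBinaries reports whether all three k8s binaries are present,
	// mirroring the "Found k8s binaries, skipping transfer" decision.
	func haveBinaries(dir string) bool {
		for _, b := range []string{"kubeadm", "kubectl", "kubelet"} {
			if _, err := os.Stat(filepath.Join(dir, b)); err != nil {
				return false
			}
		}
		return true
	}

	func main() {
		fmt.Println(haveBinaries("/var/lib/minikube/binaries/v1.28.4"))
	}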
	I1205 20:04:57.821131   72782 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1205 20:04:57.830633   72782 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1205 20:04:57.850524   72782 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:04:57.870424   72782 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1205 20:04:57.874463   72782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
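The /etc/hosts one-liner above is an idempotent upsert: strip any existing line for the host, then append the desired mapping. A Go sketch of the same transformation follows; writing the result back with the needed privileges is left out.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry mirrors the shell pipeline: drop any line ending
	// in "\t<name>", then append "<ip>\t<name>". It returns the new
	// file body rather than writing it.
	func ensureHostsEntry(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		fmt.Print(ensureHostsEntry(string(data), "192.168.58.2", "control-plane.minikube.internal"))
	}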
	I1205 20:04:57.886834   72782 host.go:66] Checking if "multinode-930892" exists ...
	I1205 20:04:57.887100   72782 start.go:304] JoinCluster: &{Name:multinode-930892 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-930892 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:04:57.887180   72782 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1205 20:04:57.887228   72782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-930892
	I1205 20:04:57.887674   72782 config.go:182] Loaded profile config "multinode-930892": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:04:57.904138   72782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/multinode-930892/id_rsa Username:docker}
	I1205 20:04:58.079540   72782 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token i17axc.sr1wjgkhoknfqew7 --discovery-token-ca-cert-hash sha256:6da2d77b39f3e1ef9cef384839cc68d840e02bf2206be4d2a37e26b3d0a71759 
	I1205 20:04:58.079584   72782 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1205 20:04:58.079630   72782 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i17axc.sr1wjgkhoknfqew7 --discovery-token-ca-cert-hash sha256:6da2d77b39f3e1ef9cef384839cc68d840e02bf2206be4d2a37e26b3d0a71759 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-930892-m02"
	I1205 20:04:58.124004   72782 command_runner.go:130] ! W1205 20:04:58.123585    1028 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1205 20:04:58.164859   72782 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1050-aws\n", err: exit status 1
	I1205 20:04:58.253093   72782 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:05:00.972425   72782 command_runner.go:130] > [preflight] Running pre-flight checks
	I1205 20:05:00.972450   72782 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1205 20:05:00.972458   72782 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1050-aws
	I1205 20:05:00.972464   72782 command_runner.go:130] > OS: Linux
	I1205 20:05:00.972470   72782 command_runner.go:130] > CGROUPS_CPU: enabled
	I1205 20:05:00.972478   72782 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1205 20:05:00.972484   72782 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1205 20:05:00.972490   72782 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1205 20:05:00.972496   72782 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1205 20:05:00.972502   72782 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1205 20:05:00.972509   72782 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1205 20:05:00.972515   72782 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1205 20:05:00.972521   72782 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1205 20:05:00.972528   72782 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1205 20:05:00.972537   72782 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1205 20:05:00.972545   72782 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:05:00.972554   72782 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:05:00.972560   72782 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1205 20:05:00.972570   72782 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1205 20:05:00.972576   72782 command_runner.go:130] > This node has joined the cluster:
	I1205 20:05:00.972585   72782 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1205 20:05:00.972592   72782 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1205 20:05:00.972602   72782 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1205 20:05:00.972614   72782 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i17axc.sr1wjgkhoknfqew7 --discovery-token-ca-cert-hash sha256:6da2d77b39f3e1ef9cef384839cc68d840e02bf2206be4d2a37e26b3d0a71759 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-930892-m02": (2.892971475s)
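The join flow in the log has two halves: kubeadm token create --print-join-command on the control plane, then the printed command re-run on the worker with the extra flags minikube appends. A compressed Go sketch, with the SSH hop between the two hosts elided and node name and socket values taken from this log:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// joinWorker asks for a ready-made join command, then appends the
	// flags seen above (--ignore-preflight-errors, --cri-socket,
	// --node-name) before running it. Both commands run locally here;
	// in reality the second runs on the worker node.
	func joinWorker(nodeName, criSocket string) error {
		out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
		if err != nil {
			return fmt.Errorf("token create: %w", err)
		}
		// e.g. "kubeadm join <endpoint> --token ... --discovery-token-ca-cert-hash ..."
		args := strings.Fields(strings.TrimSpace(string(out)))
		args = append(args, "--ignore-preflight-errors=all", "--cri-socket", criSocket, "--node-name="+nodeName)
		join := exec.Command(args[0], args[1:]...)
		join.Stdout = os.Stdout
		join.Stderr = os.Stderr
		return join.Run()
	}

	func main() {
		if err := joinWorker("multinode-930892-m02", "/var/run/crio/crio.sock"); err != nil {
			fmt.Println("join failed:", err)
		}
	}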
	I1205 20:05:00.972629   72782 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1205 20:05:01.183307   72782 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I1205 20:05:01.183435   72782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728 minikube.k8s.io/name=multinode-930892 minikube.k8s.io/updated_at=2023_12_05T20_05_01_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:05:01.295518   72782 command_runner.go:130] > node/multinode-930892-m02 labeled
	I1205 20:05:01.299140   72782 start.go:306] JoinCluster complete in 3.412037184s
	I1205 20:05:01.299164   72782 cni.go:84] Creating CNI manager for ""
	I1205 20:05:01.299170   72782 cni.go:136] 2 nodes found, recommending kindnet
	I1205 20:05:01.299219   72782 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1205 20:05:01.303693   72782 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1205 20:05:01.303711   72782 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I1205 20:05:01.303719   72782 command_runner.go:130] > Device: 3ah/58d	Inode: 1092525     Links: 1
	I1205 20:05:01.303726   72782 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 20:05:01.303735   72782 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I1205 20:05:01.303741   72782 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I1205 20:05:01.303747   72782 command_runner.go:130] > Change: 2023-12-05 19:35:53.617733280 +0000
	I1205 20:05:01.303771   72782 command_runner.go:130] >  Birth: 2023-12-05 19:35:53.577733006 +0000
	I1205 20:05:01.303805   72782 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1205 20:05:01.303816   72782 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1205 20:05:01.324489   72782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1205 20:05:01.610020   72782 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1205 20:05:01.615082   72782 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1205 20:05:01.618277   72782 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1205 20:05:01.632091   72782 command_runner.go:130] > daemonset.apps/kindnet configured
	I1205 20:05:01.638008   72782 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17731-2478/kubeconfig
	I1205 20:05:01.638270   72782 kapi.go:59] client config for multinode-930892: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/client.crt", KeyFile:"/home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/client.key", CAFile:"/home/jenkins/minikube-integration/17731-2478/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6350), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 20:05:01.638580   72782 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1205 20:05:01.638596   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:01.638605   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:01.638612   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:01.641018   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:01.641040   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:01.641049   72782 round_trippers.go:580]     Content-Length: 291
	I1205 20:05:01.641056   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:01 GMT
	I1205 20:05:01.641062   72782 round_trippers.go:580]     Audit-Id: 16cf9fb7-4096-48b8-b724-64c9aab1789e
	I1205 20:05:01.641069   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:01.641075   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:01.641086   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:01.641093   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:01.641118   72782 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"29176da8-1129-498f-981f-e9a68ede7ad4","resourceVersion":"460","creationTimestamp":"2023-12-05T20:03:59Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1205 20:05:01.641205   72782 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-930892" context rescaled to 1 replicas
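The GET and rescale above go through the Deployment scale subresource. The equivalent client-go calls, with the kubeconfig path taken from this log and error handling compressed, look roughly like this:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// Rescale coredns through the scale subresource, the same API the
	// log hits (GET then PUT .../deployments/coredns/scale).
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()
		scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		scale.Spec.Replicas = 1
		if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("coredns rescaled to 1 replica")
	}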
	I1205 20:05:01.641235   72782 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1205 20:05:01.643904   72782 out.go:177] * Verifying Kubernetes components...
	I1205 20:05:01.645983   72782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:05:01.659737   72782 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17731-2478/kubeconfig
	I1205 20:05:01.660065   72782 kapi.go:59] client config for multinode-930892: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/client.crt", KeyFile:"/home/jenkins/minikube-integration/17731-2478/.minikube/profiles/multinode-930892/client.key", CAFile:"/home/jenkins/minikube-integration/17731-2478/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6350), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 20:05:01.660351   72782 node_ready.go:35] waiting up to 6m0s for node "multinode-930892-m02" to be "Ready" ...
	I1205 20:05:01.660423   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:01.660432   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:01.660441   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:01.660452   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:01.663072   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:01.663090   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:01.663098   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:01.663105   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:01 GMT
	I1205 20:05:01.663111   72782 round_trippers.go:580]     Audit-Id: 6ae826ed-6397-4e02-a749-8d47a52eb9ea
	I1205 20:05:01.663117   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:01.663126   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:01.663135   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:01.663588   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"497","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5735 chars]
	I1205 20:05:01.664029   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:01.664047   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:01.664057   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:01.664064   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:01.666072   72782 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:05:01.666092   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:01.666101   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:01.666108   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:01 GMT
	I1205 20:05:01.666114   72782 round_trippers.go:580]     Audit-Id: cd8a162f-4257-4d4f-a758-7c0c1f7d312c
	I1205 20:05:01.666123   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:01.666133   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:01.666139   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:01.666236   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"497","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5735 chars]
	I1205 20:05:02.167165   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:02.167187   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:02.167197   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:02.167204   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:02.169624   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:02.169647   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:02.169655   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:02.169662   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:02.169669   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:02 GMT
	I1205 20:05:02.169675   72782 round_trippers.go:580]     Audit-Id: 6c42a888-6c2a-4a43-a53a-32929c599cf6
	I1205 20:05:02.169681   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:02.169687   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:02.169800   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"497","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5735 chars]
	I1205 20:05:02.666816   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:02.666837   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:02.666847   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:02.666854   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:02.669276   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:02.669298   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:02.669306   72782 round_trippers.go:580]     Audit-Id: d932fa80-1035-42bc-a637-1bb877d3a1b2
	I1205 20:05:02.669313   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:02.669319   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:02.669325   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:02.669338   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:02.669344   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:02 GMT
	I1205 20:05:02.669460   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"497","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5735 chars]
	I1205 20:05:03.167137   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:03.167165   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:03.167174   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:03.167181   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:03.169320   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:03.169343   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:03.169351   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:03.169357   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:03.169364   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:03.169374   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:03 GMT
	I1205 20:05:03.169383   72782 round_trippers.go:580]     Audit-Id: e35dc0b1-b074-4112-8eb7-f34252a2d761
	I1205 20:05:03.169389   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:03.169794   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"514","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I1205 20:05:03.666813   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:03.666881   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:03.666907   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:03.666939   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:03.669641   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:03.669660   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:03.669668   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:03.669675   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:03.669681   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:03.669688   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:03 GMT
	I1205 20:05:03.669694   72782 round_trippers.go:580]     Audit-Id: 675ce6de-eb95-45df-b3ce-882133571133
	I1205 20:05:03.669700   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:03.669946   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"514","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I1205 20:05:03.670320   72782 node_ready.go:58] node "multinode-930892-m02" has status "Ready":"False"
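The repeated node GETs around this point are a readiness poll: fetch the node, check its NodeReady condition, sleep, repeat until the 6m0s budget runs out. A client-go sketch of that loop, with the kubeconfig path and node name copied from the log:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the node roughly every 500ms and returns once
	// its NodeReady condition is True, or errors after the timeout.
	func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("node %q not Ready within %v", name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17731-2478/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println(waitNodeReady(cs, "multinode-930892-m02", 6*time.Minute))
	}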
	I1205 20:05:04.166821   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:04.166843   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:04.166852   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:04.166859   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:04.169177   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:04.169192   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:04.169200   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:04.169207   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:04 GMT
	I1205 20:05:04.169213   72782 round_trippers.go:580]     Audit-Id: c31194ff-7831-4c5b-8d61-507dac60328b
	I1205 20:05:04.169224   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:04.169231   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:04.169237   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:04.169456   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"514","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I1205 20:05:04.667615   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:04.667637   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:04.667647   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:04.667655   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:04.670416   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:04.670437   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:04.670445   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:04.670452   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:04.670459   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:04 GMT
	I1205 20:05:04.670466   72782 round_trippers.go:580]     Audit-Id: b82efedf-40c7-48a5-84bc-4b14400cf0f8
	I1205 20:05:04.670472   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:04.670479   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:04.670659   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"514","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I1205 20:05:05.167813   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:05.167847   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:05.167857   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:05.167866   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:05.170354   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:05.170379   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:05.170387   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:05.170393   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:05.170401   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:05.170407   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:05.170418   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:05 GMT
	I1205 20:05:05.170424   72782 round_trippers.go:580]     Audit-Id: 55c82a99-ddaa-4a68-b1fc-7472f6800cc5
	I1205 20:05:05.170647   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"514","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I1205 20:05:05.667495   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:05.667517   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:05.667527   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:05.667534   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:05.670018   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:05.670038   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:05.670047   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:05.670053   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:05.670060   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:05.670066   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:05.670076   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:05 GMT
	I1205 20:05:05.670086   72782 round_trippers.go:580]     Audit-Id: 917f4a13-677f-45d4-af69-205b7fee0521
	I1205 20:05:05.670418   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"514","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I1205 20:05:05.670828   72782 node_ready.go:58] node "multinode-930892-m02" has status "Ready":"False"
	I1205 20:05:06.166837   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:06.166858   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:06.166869   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:06.166876   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:06.169184   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:06.169207   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:06.169215   72782 round_trippers.go:580]     Audit-Id: 1b5fa8b3-05c4-41f9-8235-430b892d2103
	I1205 20:05:06.169221   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:06.169228   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:06.169234   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:06.169241   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:06.169248   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:06 GMT
	I1205 20:05:06.169451   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"514","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I1205 20:05:06.667086   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:06.667107   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:06.667117   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:06.667124   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:06.669458   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:06.669480   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:06.669492   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:06.669499   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:06.669505   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:06.669511   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:06.669520   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:06 GMT
	I1205 20:05:06.669526   72782 round_trippers.go:580]     Audit-Id: f182393a-d186-4023-97d9-98944d030c40
	I1205 20:05:06.669678   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"514","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I1205 20:05:07.166823   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:07.166847   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:07.166857   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:07.166867   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:07.169473   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:07.169492   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:07.169500   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:07.169507   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:07.169513   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:07.169520   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:07 GMT
	I1205 20:05:07.169526   72782 round_trippers.go:580]     Audit-Id: 62e84a1f-3a29-406e-b4a4-ef06e2023248
	I1205 20:05:07.169533   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:07.169688   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"514","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I1205 20:05:07.667808   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:07.667832   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:07.667842   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:07.667854   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:07.670415   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:07.670437   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:07.670445   72782 round_trippers.go:580]     Audit-Id: 1d064697-381b-4ac9-9d13-be4ff7224909
	I1205 20:05:07.670452   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:07.670458   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:07.670464   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:07.670470   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:07.670477   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:07 GMT
	I1205 20:05:07.670580   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"514","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I1205 20:05:07.670965   72782 node_ready.go:58] node "multinode-930892-m02" has status "Ready":"False"
	I1205 20:05:08.167673   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:08.167696   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:08.167706   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:08.167714   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:08.170063   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:08.170085   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:08.170094   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:08 GMT
	I1205 20:05:08.170100   72782 round_trippers.go:580]     Audit-Id: 3f6b2bb0-aa7e-40d0-b544-de94917179a4
	I1205 20:05:08.170107   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:08.170113   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:08.170122   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:08.170129   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:08.170345   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"514","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I1205 20:05:08.667446   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:08.667469   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:08.667478   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:08.667486   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:08.670033   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:08.670059   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:08.670068   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:08.670075   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:08.670082   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:08 GMT
	I1205 20:05:08.670089   72782 round_trippers.go:580]     Audit-Id: 073dfecc-535b-4f72-99a9-ce250cfda187
	I1205 20:05:08.670095   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:08.670105   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:08.670392   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"514","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I1205 20:05:09.166833   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:09.166857   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:09.166867   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:09.166874   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:09.169310   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:09.169329   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:09.169338   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:09 GMT
	I1205 20:05:09.169344   72782 round_trippers.go:580]     Audit-Id: 250f29ed-ce6c-4d94-81c3-b5d40ae5893c
	I1205 20:05:09.169351   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:09.169357   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:09.169363   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:09.169369   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:09.169522   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"514","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I1205 20:05:09.667373   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:09.667395   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:09.667404   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:09.667411   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:09.669844   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:09.669913   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:09.669935   72782 round_trippers.go:580]     Audit-Id: ac8a4620-1e20-4541-9101-13e324188c90
	I1205 20:05:09.669957   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:09.669987   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:09.670009   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:09.670028   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:09.670048   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:09 GMT
	I1205 20:05:09.670176   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"514","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I1205 20:05:10.167787   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:10.167814   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:10.167825   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:10.167832   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:10.170906   72782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:05:10.170930   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:10.170938   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:10 GMT
	I1205 20:05:10.170945   72782 round_trippers.go:580]     Audit-Id: 32dfc362-d629-4684-aaba-17abe76aa884
	I1205 20:05:10.170952   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:10.170958   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:10.170964   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:10.170974   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:10.171123   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"514","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I1205 20:05:10.171516   72782 node_ready.go:58] node "multinode-930892-m02" has status "Ready":"False"
	I1205 20:05:10.667775   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:10.667814   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:10.667823   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:10.667830   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:10.670174   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:10.670193   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:10.670201   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:10.670207   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:10.670213   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:10.670219   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:10.670225   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:10 GMT
	I1205 20:05:10.670231   72782 round_trippers.go:580]     Audit-Id: c1c897d2-4adc-4eb2-b8a9-537e88fd74c9
	I1205 20:05:10.670336   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"514","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I1205 20:05:11.167303   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:11.167331   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:11.167341   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:11.167364   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:11.169813   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:11.169836   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:11.169845   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:11.169852   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:11.169858   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:11 GMT
	I1205 20:05:11.169864   72782 round_trippers.go:580]     Audit-Id: aac4c858-5df1-4489-837d-2410c7a5409f
	I1205 20:05:11.169875   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:11.169885   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:11.170160   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"514","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I1205 20:05:11.666760   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:11.666786   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:11.666800   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:11.666813   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:11.669288   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:11.669311   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:11.669319   72782 round_trippers.go:580]     Audit-Id: 65c3a352-4275-4eda-bcfd-12b827db77df
	I1205 20:05:11.669326   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:11.669332   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:11.669339   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:11.669346   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:11.669352   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:11 GMT
	I1205 20:05:11.669648   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:12.167773   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:12.167796   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:12.167806   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:12.167813   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:12.170389   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:12.170450   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:12.170473   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:12 GMT
	I1205 20:05:12.170492   72782 round_trippers.go:580]     Audit-Id: 8f11fd41-8a47-4ace-8be1-7719d0e7c4a2
	I1205 20:05:12.170530   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:12.170553   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:12.170571   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:12.170579   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:12.170763   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:12.666771   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:12.666811   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:12.666825   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:12.666833   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:12.669340   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:12.669361   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:12.669370   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:12.669376   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:12.669397   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:12.669408   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:12 GMT
	I1205 20:05:12.669415   72782 round_trippers.go:580]     Audit-Id: 4c4f1a36-1ade-41c2-9cec-146e03b396de
	I1205 20:05:12.669426   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:12.669599   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:12.670009   72782 node_ready.go:58] node "multinode-930892-m02" has status "Ready":"False"
	I1205 20:05:13.167148   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:13.167171   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:13.167181   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:13.167188   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:13.169881   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:13.169904   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:13.169913   72782 round_trippers.go:580]     Audit-Id: 92e394b1-ac27-4b6e-a048-27cada1c98fe
	I1205 20:05:13.169919   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:13.169926   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:13.169932   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:13.169938   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:13.169945   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:13 GMT
	I1205 20:05:13.170099   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:13.667299   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:13.667327   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:13.667338   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:13.667361   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:13.669933   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:13.669955   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:13.669963   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:13.669970   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:13.669977   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:13.669983   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:13.669990   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:13 GMT
	I1205 20:05:13.669996   72782 round_trippers.go:580]     Audit-Id: f7f7bd45-a184-4fae-8024-e2eb20b328d6
	I1205 20:05:13.670112   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:14.167426   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:14.167467   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:14.167477   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:14.167485   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:14.169939   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:14.169972   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:14.169984   72782 round_trippers.go:580]     Audit-Id: c288cc64-e86e-427c-bee8-802e5d42735e
	I1205 20:05:14.169991   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:14.169997   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:14.170004   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:14.170010   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:14.170017   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:14 GMT
	I1205 20:05:14.170279   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:14.667369   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:14.667395   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:14.667405   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:14.667413   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:14.669830   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:14.669851   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:14.669858   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:14.669865   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:14.669871   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:14.669877   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:14 GMT
	I1205 20:05:14.669884   72782 round_trippers.go:580]     Audit-Id: 9e67f46b-3553-43ce-acd9-3dddf57b2e95
	I1205 20:05:14.669890   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:14.670015   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:14.670396   72782 node_ready.go:58] node "multinode-930892-m02" has status "Ready":"False"
	I1205 20:05:15.166832   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:15.166852   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:15.166862   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:15.166870   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:15.169517   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:15.169543   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:15.169551   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:15.169558   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:15.169565   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:15.169571   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:15.169579   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:15 GMT
	I1205 20:05:15.169585   72782 round_trippers.go:580]     Audit-Id: aedee4bf-b8e0-4b07-b20c-e0a2646147e0
	I1205 20:05:15.169919   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:15.667244   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:15.667268   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:15.667277   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:15.667285   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:15.669703   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:15.669720   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:15.669728   72782 round_trippers.go:580]     Audit-Id: d9b7449a-e1cc-4b2b-942a-4838fd9b7c57
	I1205 20:05:15.669734   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:15.669740   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:15.669747   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:15.669753   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:15.669759   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:15 GMT
	I1205 20:05:15.669883   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:16.166933   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:16.166952   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:16.166962   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:16.166971   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:16.169038   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:16.169054   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:16.169063   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:16.169070   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:16.169076   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:16 GMT
	I1205 20:05:16.169083   72782 round_trippers.go:580]     Audit-Id: 5be11bf2-9097-4eaf-afc9-64e61117e39f
	I1205 20:05:16.169089   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:16.169095   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:16.169213   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:16.667370   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:16.667392   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:16.667402   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:16.667409   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:16.669784   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:16.669810   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:16.669820   72782 round_trippers.go:580]     Audit-Id: 55483693-923b-4e8f-97e3-af13b055cb80
	I1205 20:05:16.669826   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:16.669832   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:16.669839   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:16.669856   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:16.669864   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:16 GMT
	I1205 20:05:16.669973   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:17.167070   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:17.167097   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:17.167107   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:17.167114   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:17.169371   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:17.169393   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:17.169401   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:17.169408   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:17 GMT
	I1205 20:05:17.169414   72782 round_trippers.go:580]     Audit-Id: b63c5b92-02dc-47db-8812-99640d30dc06
	I1205 20:05:17.169421   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:17.169431   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:17.169438   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:17.169743   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:17.170145   72782 node_ready.go:58] node "multinode-930892-m02" has status "Ready":"False"
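[Editor's note] The cycle traced above repeats every ~500ms: minikube GETs the node object from the API server and re-checks its Ready condition until it flips to "True" or the wait times out. A minimal stand-alone sketch of that polling pattern, assuming client-go and a reachable kubeconfig — illustrative only, not minikube's actual node_ready.go:

// Illustrative reconstruction of the readiness poll traced in this log; the
// helper names and the 6-minute timeout are assumptions, not minikube's code.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	const node = "multinode-930892-m02"
	// Poll every 500ms, matching the cadence in the log; the timeout is assumed.
	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		n, err := client.CoreV1().Nodes().Get(context.TODO(), node, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat API errors as transient and keep polling
		}
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("node %q has status \"Ready\":%q\n", node, c.Status)
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil // no Ready condition reported yet
	})
	if err != nil {
		panic(err) // wait.ErrWaitTimeout if the node never became Ready
	}
}

Each iteration of that loop corresponds to one GET / Response Status / Response Headers / Response Body group in the trace that follows.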
	I1205 20:05:17.667065   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:17.667088   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:17.667097   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:17.667105   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:17.669541   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:17.669559   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:17.669567   72782 round_trippers.go:580]     Audit-Id: 0239957f-5659-436e-8b5f-3ed373efb8f4
	I1205 20:05:17.669573   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:17.669579   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:17.669586   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:17.669592   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:17.669600   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:17 GMT
	I1205 20:05:17.669710   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:18.167381   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:18.167402   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:18.167411   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:18.167418   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:18.169962   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:18.169983   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:18.169991   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:18.169998   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:18.170005   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:18 GMT
	I1205 20:05:18.170011   72782 round_trippers.go:580]     Audit-Id: dd56ea41-cdc2-4617-8067-ca0158795c79
	I1205 20:05:18.170017   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:18.170024   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:18.170379   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:18.666961   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:18.666992   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:18.667002   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:18.667010   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:18.669506   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:18.669570   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:18.669581   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:18.669588   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:18.669595   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:18 GMT
	I1205 20:05:18.669601   72782 round_trippers.go:580]     Audit-Id: 0f9fdf66-0266-48c8-a841-53560e993674
	I1205 20:05:18.669607   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:18.669613   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:18.669747   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:19.167394   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:19.167417   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:19.167427   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:19.167434   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:19.169733   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:19.169756   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:19.169764   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:19 GMT
	I1205 20:05:19.169771   72782 round_trippers.go:580]     Audit-Id: 58d961dc-2e35-464d-b723-5440c0e9d2a3
	I1205 20:05:19.169777   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:19.169784   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:19.169790   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:19.169797   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:19.170080   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:19.170467   72782 node_ready.go:58] node "multinode-930892-m02" has status "Ready":"False"
	I1205 20:05:19.667123   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:19.667144   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:19.667154   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:19.667161   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:19.669578   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:19.669597   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:19.669606   72782 round_trippers.go:580]     Audit-Id: eec6fd9b-fca5-4fd3-a7d1-044bcdcbf5b0
	I1205 20:05:19.669613   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:19.669619   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:19.669625   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:19.669631   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:19.669639   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:19 GMT
	I1205 20:05:19.669810   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:20.167472   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:20.167496   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:20.167505   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:20.167515   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:20.170090   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:20.170112   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:20.170120   72782 round_trippers.go:580]     Audit-Id: 11df2003-9ee6-4b56-99d2-82961cccf6f8
	I1205 20:05:20.170126   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:20.170133   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:20.170139   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:20.170145   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:20.170151   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:20 GMT
	I1205 20:05:20.170339   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:20.667477   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:20.667499   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:20.667509   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:20.667516   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:20.669875   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:20.669894   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:20.669902   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:20 GMT
	I1205 20:05:20.669908   72782 round_trippers.go:580]     Audit-Id: 7d57481a-6d82-4fbd-a163-bccba5a6966e
	I1205 20:05:20.669914   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:20.669921   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:20.669927   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:20.669933   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:20.670102   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:21.167334   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:21.167357   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:21.167367   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:21.167375   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:21.169816   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:21.169841   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:21.169849   72782 round_trippers.go:580]     Audit-Id: 22b8a271-8e8e-46e0-836a-78027eda354c
	I1205 20:05:21.169856   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:21.169862   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:21.169868   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:21.169875   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:21.169882   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:21 GMT
	I1205 20:05:21.170037   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:21.667704   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:21.667727   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:21.667737   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:21.667745   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:21.670221   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:21.670240   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:21.670248   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:21.670255   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:21.670261   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:21 GMT
	I1205 20:05:21.670267   72782 round_trippers.go:580]     Audit-Id: 96bd5882-45c7-451b-9a03-37abb0910f2e
	I1205 20:05:21.670274   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:21.670280   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:21.670396   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:21.670787   72782 node_ready.go:58] node "multinode-930892-m02" has status "Ready":"False"
	I1205 20:05:22.167488   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:22.167508   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:22.167518   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:22.167525   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:22.169888   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:22.169911   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:22.169920   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:22.169927   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:22 GMT
	I1205 20:05:22.169933   72782 round_trippers.go:580]     Audit-Id: a36e6ad3-7d53-4ac7-b0f8-55556729a3d7
	I1205 20:05:22.169940   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:22.169951   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:22.169957   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:22.170185   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:22.667211   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:22.667233   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:22.667243   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:22.667250   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:22.669702   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:22.669723   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:22.669731   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:22.669738   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:22.669745   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:22 GMT
	I1205 20:05:22.669751   72782 round_trippers.go:580]     Audit-Id: 295892d3-fdfc-491d-b230-cec571302eab
	I1205 20:05:22.669757   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:22.669763   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:22.669938   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:23.167558   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:23.167580   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:23.167590   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:23.167597   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:23.169888   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:23.169906   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:23.169915   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:23.169923   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:23.169930   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:23 GMT
	I1205 20:05:23.169936   72782 round_trippers.go:580]     Audit-Id: 05446fd5-6266-4ae1-809b-cc46ad443065
	I1205 20:05:23.169942   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:23.169948   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:23.170075   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:23.666861   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:23.666884   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:23.666894   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:23.666902   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:23.669358   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:23.669382   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:23.669391   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:23.669398   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:23 GMT
	I1205 20:05:23.669404   72782 round_trippers.go:580]     Audit-Id: bbf8b58e-70b9-4722-a308-8449c7f4861f
	I1205 20:05:23.669411   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:23.669417   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:23.669424   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:23.669550   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:24.167787   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:24.167814   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:24.167824   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:24.167832   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:24.170203   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:24.170230   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:24.170239   72782 round_trippers.go:580]     Audit-Id: c41c843d-401f-42ae-81d8-d681497641be
	I1205 20:05:24.170246   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:24.170253   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:24.170262   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:24.170271   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:24.170279   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:24 GMT
	I1205 20:05:24.170621   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:24.171025   72782 node_ready.go:58] node "multinode-930892-m02" has status "Ready":"False"
	I1205 20:05:24.666839   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:24.666859   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:24.666868   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:24.666875   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:24.669262   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:24.669283   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:24.669292   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:24.669298   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:24.669305   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:24.669311   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:24 GMT
	I1205 20:05:24.669323   72782 round_trippers.go:580]     Audit-Id: 8a612967-2739-4f57-b955-1f54ae12fa77
	I1205 20:05:24.669330   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:24.669566   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:25.167507   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:25.167548   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:25.167558   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:25.167565   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:25.169964   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:25.169982   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:25.169991   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:25.169998   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:25 GMT
	I1205 20:05:25.170004   72782 round_trippers.go:580]     Audit-Id: c9ddea13-99b8-43b0-ae06-4d78d05a81e3
	I1205 20:05:25.170011   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:25.170016   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:25.170023   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:25.170176   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:25.667587   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:25.667614   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:25.667623   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:25.667630   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:25.670036   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:25.670055   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:25.670063   72782 round_trippers.go:580]     Audit-Id: 768cec98-54d9-4bca-ac9f-7ed262bbb46f
	I1205 20:05:25.670069   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:25.670075   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:25.670081   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:25.670087   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:25.670094   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:25 GMT
	I1205 20:05:25.670214   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:26.166843   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:26.166865   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:26.166875   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:26.166882   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:26.169338   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:26.169363   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:26.169371   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:26.169378   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:26 GMT
	I1205 20:05:26.169384   72782 round_trippers.go:580]     Audit-Id: 84acb5e7-7208-4c45-8972-133a9d6acf29
	I1205 20:05:26.169391   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:26.169397   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:26.169403   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:26.169524   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:26.667679   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:26.667706   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:26.667716   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:26.667724   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:26.670207   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:26.670229   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:26.670239   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:26.670247   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:26 GMT
	I1205 20:05:26.670254   72782 round_trippers.go:580]     Audit-Id: d9b780d9-2387-4a4a-888a-e22c2e45d446
	I1205 20:05:26.670260   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:26.670267   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:26.670277   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:26.670404   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:26.670823   72782 node_ready.go:58] node "multinode-930892-m02" has status "Ready":"False"
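[Editor's note] The round_trippers.go lines themselves come from client-go's debugging transport, which wraps the HTTP client and echoes each request line, the response status, and the response headers when the process runs at high klog verbosity (the "[truncated 6113 chars]" body lines are klog's own truncation). A stdlib-only sketch of that wrapping pattern — an assumption-labeled illustration, not client-go's real implementation:

// A generic logging RoundTripper, analogous in spirit to what produces the
// round_trippers.go trace above. Pure standard library; names are ours.
package main

import (
	"log"
	"net/http"
)

// loggingTransport wraps another RoundTripper and logs the request line,
// request headers, response status, and response headers.
type loggingTransport struct {
	next http.RoundTripper
}

func (t *loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	log.Printf("%s %s", req.Method, req.URL)
	log.Printf("Request Headers:")
	for k, v := range req.Header {
		log.Printf("    %s: %v", k, v)
	}
	resp, err := t.next.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	log.Printf("Response Status: %s", resp.Status)
	log.Printf("Response Headers:")
	for k, v := range resp.Header {
		log.Printf("    %s: %v", k, v)
	}
	return resp, nil
}

func main() {
	client := &http.Client{Transport: &loggingTransport{next: http.DefaultTransport}}
	// Any GET demonstrates the wrapping; the log above shows the same shape
	// against https://192.168.58.2:8443/api/v1/nodes/....
	resp, err := client.Get("https://example.com/")
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
}

Wrapping the transport once, rather than logging at every call site, is why each of the hundreds of polls in this trace emits an identical, uniformly formatted request/response group.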
	I1205 20:05:27.167545   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:27.167567   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:27.167577   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:27.167584   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:27.169964   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:27.169982   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:27.169990   72782 round_trippers.go:580]     Audit-Id: 093245b7-e38a-4c0b-aeb3-4700093113c8
	I1205 20:05:27.169997   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:27.170005   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:27.170011   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:27.170017   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:27.170024   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:27 GMT
	I1205 20:05:27.170179   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:27.667308   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:27.667331   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:27.667340   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:27.667348   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:27.669842   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:27.669867   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:27.669875   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:27.669882   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:27.669888   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:27.669895   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:27 GMT
	I1205 20:05:27.669912   72782 round_trippers.go:580]     Audit-Id: 5fd67dc9-745b-404d-b3cf-0d4e103d620c
	I1205 20:05:27.669926   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:27.670068   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:28.166823   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:28.166852   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:28.166862   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:28.166869   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:28.169365   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:28.169386   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:28.169394   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:28 GMT
	I1205 20:05:28.169401   72782 round_trippers.go:580]     Audit-Id: f8fbb3de-21d6-484b-b1be-23fabf164c88
	I1205 20:05:28.169407   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:28.169413   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:28.169424   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:28.169430   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:28.169640   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:28.666890   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:28.666915   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:28.666925   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:28.666933   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:28.669338   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:28.669357   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:28.669364   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:28.669372   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:28.669378   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:28.669385   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:28.669391   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:28 GMT
	I1205 20:05:28.669397   72782 round_trippers.go:580]     Audit-Id: a36f1383-39ca-4ddb-a704-0798f84cdaa5
	I1205 20:05:28.669566   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:29.167689   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:29.167713   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:29.167723   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:29.167730   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:29.170943   72782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:05:29.170964   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:29.170972   72782 round_trippers.go:580]     Audit-Id: 0f168353-e409-4316-9da7-a7a7da0d969f
	I1205 20:05:29.170979   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:29.170985   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:29.170992   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:29.171002   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:29.171022   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:29 GMT
	I1205 20:05:29.171207   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:29.171593   72782 node_ready.go:58] node "multinode-930892-m02" has status "Ready":"False"
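
The round_trippers/request lines throughout this trace are client-go's HTTP debug logging, emitted when klog verbosity is high enough (roughly -v=6 for URL timing up through -v=8 for request/response headers and truncated response bodies, which matches the "[truncated ...]" markers here — the exact level-to-detail mapping is an assumption, not stated in this report). A minimal sketch, assuming a standalone Go binary wired to klog, of turning that tracing on:

```go
package main

import (
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	klog.InitFlags(nil)     // register klog's flags (-v, -logtostderr, ...) on the default FlagSet
	_ = flag.Set("v", "8")  // header- and body-level HTTP tracing, as seen in this report
	flag.Parse()
	klog.V(8).Infof("client-go debug logging enabled")
}
```
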
	I1205 20:05:29.666845   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:29.666870   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:29.666879   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:29.666887   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:29.669242   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:29.669260   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:29.669269   72782 round_trippers.go:580]     Audit-Id: 32e50b79-9d5a-4f6d-9f5e-0ec12de2a89a
	I1205 20:05:29.669275   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:29.669281   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:29.669287   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:29.669294   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:29.669300   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:29 GMT
	I1205 20:05:29.669437   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:30.166843   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:30.166868   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:30.166878   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:30.166886   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:30.169322   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:30.169345   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:30.169353   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:30.169360   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:30.169366   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:30.169373   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:30 GMT
	I1205 20:05:30.169379   72782 round_trippers.go:580]     Audit-Id: a3025956-84ce-4bd5-9760-2d552ae1aec8
	I1205 20:05:30.169386   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:30.169551   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:30.667114   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:30.667140   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:30.667150   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:30.667157   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:30.669755   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:30.669780   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:30.669813   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:30.669821   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:30.669828   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:30.669840   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:30 GMT
	I1205 20:05:30.669848   72782 round_trippers.go:580]     Audit-Id: ddcfcdd8-07cb-406c-ba53-578a9bc68392
	I1205 20:05:30.669855   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:30.669983   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:31.167732   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:31.167773   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:31.167784   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:31.167792   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:31.170275   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:31.170314   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:31.170324   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:31 GMT
	I1205 20:05:31.170330   72782 round_trippers.go:580]     Audit-Id: 95bbb43c-1a4e-44e8-b12e-6cee3570e164
	I1205 20:05:31.170337   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:31.170343   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:31.170353   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:31.170360   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:31.170706   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:31.667072   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:31.667096   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:31.667106   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:31.667113   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:31.669585   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:31.669604   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:31.669613   72782 round_trippers.go:580]     Audit-Id: cd181d66-7853-4477-9b69-b8c4feeb6a90
	I1205 20:05:31.669619   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:31.669625   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:31.669631   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:31.669638   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:31.669644   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:31 GMT
	I1205 20:05:31.669843   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:31.670267   72782 node_ready.go:58] node "multinode-930892-m02" has status "Ready":"False"
	I1205 20:05:32.167587   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:32.167610   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:32.167621   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:32.167628   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:32.170588   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:32.170612   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:32.170620   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:32.170627   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:32.170633   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:32 GMT
	I1205 20:05:32.170640   72782 round_trippers.go:580]     Audit-Id: 8ff8f7b2-7585-4649-a451-a07854d4b3ea
	I1205 20:05:32.170650   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:32.170656   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:32.171343   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"521","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I1205 20:05:32.666940   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:32.666966   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:32.666976   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:32.666984   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:32.669332   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:32.669353   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:32.669361   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:32.669367   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:32.669374   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:32.669380   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:32 GMT
	I1205 20:05:32.669387   72782 round_trippers.go:580]     Audit-Id: a8f12bc8-37af-49ab-99d7-185837ce4658
	I1205 20:05:32.669397   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:32.669658   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"545","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5930 chars]
	I1205 20:05:32.670057   72782 node_ready.go:49] node "multinode-930892-m02" has status "Ready":"True"
	I1205 20:05:32.670077   72782 node_ready.go:38] duration metric: took 31.009707411s waiting for node "multinode-930892-m02" to be "Ready" ...
	I1205 20:05:32.670088   72782 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
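
The ~half-second GET cadence above is a plain poll-until-Ready loop against the Node object. A minimal sketch of that pattern with client-go (not minikube's actual node_ready helper; the kubeconfig path, interval, and timeout are illustrative):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the API server until the named node's NodeReady
// condition is True, or the timeout expires. Transient GET errors are
// swallowed so the loop keeps polling.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					fmt.Printf("node %q has status %q:%q\n", name, c.Type, c.Status)
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	// Illustrative kubeconfig path; substitute your own.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(context.Background(), cs, "multinode-930892-m02"); err != nil {
		panic(err)
	}
}
```
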
	I1205 20:05:32.670154   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1205 20:05:32.670166   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:32.670173   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:32.670180   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:32.673648   72782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:05:32.673667   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:32.673680   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:32.673687   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:32 GMT
	I1205 20:05:32.673693   72782 round_trippers.go:580]     Audit-Id: 142b4b25-a368-42b3-b20f-edb73ef75676
	I1205 20:05:32.673702   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:32.673711   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:32.673718   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:32.674549   72782 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"545"},"items":[{"metadata":{"name":"coredns-5dd5756b68-jg6xb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"68a13ae5-1cba-4475-b33a-8090d3001eae","resourceVersion":"456","creationTimestamp":"2023-12-05T20:04:13Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9044bb53-e854-441b-a046-ca23be2eacc5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9044bb53-e854-441b-a046-ca23be2eacc5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68972 chars]
	I1205 20:05:32.677413   72782 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-jg6xb" in "kube-system" namespace to be "Ready" ...
	I1205 20:05:32.677485   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-jg6xb
	I1205 20:05:32.677496   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:32.677504   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:32.677511   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:32.679607   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:32.679625   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:32.679633   72782 round_trippers.go:580]     Audit-Id: 23ae438d-4c36-4e57-8d73-873dd455bfa8
	I1205 20:05:32.679640   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:32.679646   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:32.679652   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:32.679670   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:32.679680   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:32 GMT
	I1205 20:05:32.679832   72782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-jg6xb","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"68a13ae5-1cba-4475-b33a-8090d3001eae","resourceVersion":"456","creationTimestamp":"2023-12-05T20:04:13Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"9044bb53-e854-441b-a046-ca23be2eacc5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9044bb53-e854-441b-a046-ca23be2eacc5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1205 20:05:32.680272   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:05:32.680288   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:32.680296   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:32.680303   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:32.682326   72782 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:05:32.682343   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:32.682350   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:32.682356   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:32 GMT
	I1205 20:05:32.682363   72782 round_trippers.go:580]     Audit-Id: a6df3a54-77f2-44aa-8b96-5bf67d0139ba
	I1205 20:05:32.682369   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:32.682375   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:32.682385   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:32.682563   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"437","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1205 20:05:32.682923   72782 pod_ready.go:92] pod "coredns-5dd5756b68-jg6xb" in "kube-system" namespace has status "Ready":"True"
	I1205 20:05:32.682937   72782 pod_ready.go:81] duration metric: took 5.502598ms waiting for pod "coredns-5dd5756b68-jg6xb" in "kube-system" namespace to be "Ready" ...
	I1205 20:05:32.682947   72782 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-930892" in "kube-system" namespace to be "Ready" ...
	I1205 20:05:32.682993   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-930892
	I1205 20:05:32.683003   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:32.683010   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:32.683017   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:32.684878   72782 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:05:32.684898   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:32.684915   72782 round_trippers.go:580]     Audit-Id: 1c801592-c5b0-466f-a392-0754f3227fec
	I1205 20:05:32.684925   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:32.684938   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:32.684945   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:32.684954   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:32.684961   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:32 GMT
	I1205 20:05:32.685241   72782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-930892","namespace":"kube-system","uid":"610946f2-2a5c-4e9c-8bee-127cca42502c","resourceVersion":"424","creationTimestamp":"2023-12-05T20:04:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"3abdfcff53e2d32d8b1b2cebb83c49c3","kubernetes.io/config.mirror":"3abdfcff53e2d32d8b1b2cebb83c49c3","kubernetes.io/config.seen":"2023-12-05T20:04:00.077941695Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1205 20:05:32.685627   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:05:32.685642   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:32.685652   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:32.685664   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:32.687531   72782 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:05:32.687550   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:32.687569   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:32.687581   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:32.687591   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:32.687598   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:32 GMT
	I1205 20:05:32.687609   72782 round_trippers.go:580]     Audit-Id: fc68cc19-c69e-4d27-8def-f9d96cf9b5a3
	I1205 20:05:32.687616   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:32.687923   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"437","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1205 20:05:32.688286   72782 pod_ready.go:92] pod "etcd-multinode-930892" in "kube-system" namespace has status "Ready":"True"
	I1205 20:05:32.688300   72782 pod_ready.go:81] duration metric: took 5.346928ms waiting for pod "etcd-multinode-930892" in "kube-system" namespace to be "Ready" ...
	I1205 20:05:32.688315   72782 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-930892" in "kube-system" namespace to be "Ready" ...
	I1205 20:05:32.688380   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-930892
	I1205 20:05:32.688390   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:32.688398   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:32.688405   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:32.690370   72782 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:05:32.690391   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:32.690413   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:32.690426   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:32.690432   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:32 GMT
	I1205 20:05:32.690439   72782 round_trippers.go:580]     Audit-Id: ab820826-eabd-4b53-a1bc-69f9376e09a0
	I1205 20:05:32.690449   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:32.690455   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:32.690573   72782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-930892","namespace":"kube-system","uid":"ff4b2f9f-04b3-4c77-abdd-ed293fe3336d","resourceVersion":"425","creationTimestamp":"2023-12-05T20:04:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b526022c473bec524d839dcb362d3da6","kubernetes.io/config.mirror":"b526022c473bec524d839dcb362d3da6","kubernetes.io/config.seen":"2023-12-05T20:04:00.077933424Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1205 20:05:32.691040   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:05:32.691055   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:32.691062   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:32.691069   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:32.692944   72782 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:05:32.692963   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:32.692970   72782 round_trippers.go:580]     Audit-Id: 2ed6ad67-073a-449f-89ad-53a920015602
	I1205 20:05:32.692977   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:32.692983   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:32.692990   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:32.692999   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:32.693006   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:32 GMT
	I1205 20:05:32.693181   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"437","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1205 20:05:32.693557   72782 pod_ready.go:92] pod "kube-apiserver-multinode-930892" in "kube-system" namespace has status "Ready":"True"
	I1205 20:05:32.693571   72782 pod_ready.go:81] duration metric: took 5.243584ms waiting for pod "kube-apiserver-multinode-930892" in "kube-system" namespace to be "Ready" ...
	I1205 20:05:32.693580   72782 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-930892" in "kube-system" namespace to be "Ready" ...
	I1205 20:05:32.693627   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-930892
	I1205 20:05:32.693639   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:32.693647   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:32.693654   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:32.695654   72782 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:05:32.695673   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:32.695681   72782 round_trippers.go:580]     Audit-Id: dfab78ea-9c11-4f2a-8621-c410189e7dd3
	I1205 20:05:32.695688   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:32.695694   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:32.695700   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:32.695707   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:32.695715   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:32 GMT
	I1205 20:05:32.695950   72782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-930892","namespace":"kube-system","uid":"bf7a9066-c8ab-4c6e-b0cd-970b69612e10","resourceVersion":"426","creationTimestamp":"2023-12-05T20:04:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"140869da40b493f6e05a96a4f7fbfe02","kubernetes.io/config.mirror":"140869da40b493f6e05a96a4f7fbfe02","kubernetes.io/config.seen":"2023-12-05T20:04:00.077939233Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1205 20:05:32.696420   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:05:32.696437   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:32.696445   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:32.696452   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:32.698337   72782 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:05:32.698367   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:32.698376   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:32.698383   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:32 GMT
	I1205 20:05:32.698392   72782 round_trippers.go:580]     Audit-Id: aaf2fb65-e86b-468f-87d5-87b5ec6fc0a2
	I1205 20:05:32.698405   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:32.698411   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:32.698422   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:32.698702   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"437","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1205 20:05:32.699120   72782 pod_ready.go:92] pod "kube-controller-manager-multinode-930892" in "kube-system" namespace has status "Ready":"True"
	I1205 20:05:32.699139   72782 pod_ready.go:81] duration metric: took 5.551706ms waiting for pod "kube-controller-manager-multinode-930892" in "kube-system" namespace to be "Ready" ...
	I1205 20:05:32.699151   72782 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6w78n" in "kube-system" namespace to be "Ready" ...
	I1205 20:05:32.867506   72782 request.go:629] Waited for 168.295273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6w78n
	I1205 20:05:32.867562   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6w78n
	I1205 20:05:32.867571   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:32.867582   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:32.867591   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:32.870101   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:32.870125   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:32.870134   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:32.870140   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:32.870146   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:32 GMT
	I1205 20:05:32.870153   72782 round_trippers.go:580]     Audit-Id: ddc610c5-129c-49b9-adc3-1fd81b56294c
	I1205 20:05:32.870162   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:32.870169   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:32.870296   72782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6w78n","generateName":"kube-proxy-","namespace":"kube-system","uid":"03795216-bf59-4a7c-b038-4c1cd3662263","resourceVersion":"509","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"5db642e7-1b4a-4211-a43b-b4b188b9f76b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5db642e7-1b4a-4211-a43b-b4b188b9f76b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
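
The "Waited for ... due to client-side throttling" entries come from client-go's own token-bucket limiter, separate from server-side priority and fairness (as the message itself notes). A minimal sketch of that behavior, assuming client-go's default budget of QPS 5 / Burst 10 when rest.Config leaves the fields unset (this report does not state what minikube configures):

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// 5 req/s steady state with a burst of 10 — client-go's defaults.
	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)
	for i := 0; i < 15; i++ {
		start := time.Now()
		limiter.Accept() // blocks once the burst budget is spent
		if wait := time.Since(start); wait > time.Millisecond {
			fmt.Printf("request %d waited %v due to client-side throttling\n", i, wait)
		}
	}
}
```
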
	I1205 20:05:33.066999   72782 request.go:629] Waited for 196.244007ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:33.067054   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892-m02
	I1205 20:05:33.067064   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:33.067073   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:33.067085   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:33.069501   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:33.069561   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:33.069581   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:33.069601   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:33.069634   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:33.069657   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:33 GMT
	I1205 20:05:33.069676   72782 round_trippers.go:580]     Audit-Id: e7524e19-d983-4716-be59-b94ff149c2da
	I1205 20:05:33.069695   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:33.069861   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892-m02","uid":"f7eeecca-0439-4a39-b143-6557cd826b41","resourceVersion":"545","creationTimestamp":"2023-12-05T20:05:00Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_05_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5930 chars]
	I1205 20:05:33.070318   72782 pod_ready.go:92] pod "kube-proxy-6w78n" in "kube-system" namespace has status "Ready":"True"
	I1205 20:05:33.070339   72782 pod_ready.go:81] duration metric: took 371.180842ms waiting for pod "kube-proxy-6w78n" in "kube-system" namespace to be "Ready" ...
	I1205 20:05:33.070351   72782 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-skbnx" in "kube-system" namespace to be "Ready" ...
	I1205 20:05:33.267736   72782 request.go:629] Waited for 197.327553ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-skbnx
	I1205 20:05:33.267870   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-skbnx
	I1205 20:05:33.267885   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:33.267900   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:33.267909   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:33.270382   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:33.270449   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:33.270471   72782 round_trippers.go:580]     Audit-Id: 877722db-351b-4402-ba2f-33c50950e2e1
	I1205 20:05:33.270501   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:33.270534   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:33.270557   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:33.270569   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:33.270576   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:33 GMT
	I1205 20:05:33.270710   72782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-skbnx","generateName":"kube-proxy-","namespace":"kube-system","uid":"18565024-772b-429b-8d9b-77a81590210e","resourceVersion":"420","creationTimestamp":"2023-12-05T20:04:13Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"5db642e7-1b4a-4211-a43b-b4b188b9f76b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5db642e7-1b4a-4211-a43b-b4b188b9f76b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1205 20:05:33.467517   72782 request.go:629] Waited for 196.318797ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:05:33.467578   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:05:33.467588   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:33.467613   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:33.467625   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:33.470122   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:33.470185   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:33.470209   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:33 GMT
	I1205 20:05:33.470227   72782 round_trippers.go:580]     Audit-Id: 4cd7b232-195f-47fa-a470-ebc50cca78ff
	I1205 20:05:33.470262   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:33.470288   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:33.470308   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:33.470341   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:33.470495   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"437","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1205 20:05:33.470887   72782 pod_ready.go:92] pod "kube-proxy-skbnx" in "kube-system" namespace has status "Ready":"True"
	I1205 20:05:33.470902   72782 pod_ready.go:81] duration metric: took 400.545021ms waiting for pod "kube-proxy-skbnx" in "kube-system" namespace to be "Ready" ...
	I1205 20:05:33.470914   72782 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-930892" in "kube-system" namespace to be "Ready" ...
	I1205 20:05:33.667335   72782 request.go:629] Waited for 196.339236ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-930892
	I1205 20:05:33.667419   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-930892
	I1205 20:05:33.667431   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:33.667447   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:33.667458   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:33.670026   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:33.670093   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:33.670107   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:33.670115   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:33 GMT
	I1205 20:05:33.670136   72782 round_trippers.go:580]     Audit-Id: 49ca167e-6452-4521-b334-7e2c0ed54eb3
	I1205 20:05:33.670151   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:33.670157   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:33.670177   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:33.670305   72782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-930892","namespace":"kube-system","uid":"9e837e17-e45a-4631-92ba-602746f09a15","resourceVersion":"427","creationTimestamp":"2023-12-05T20:04:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fe34a8451b0f5ac84df3ae08c2adbedb","kubernetes.io/config.mirror":"fe34a8451b0f5ac84df3ae08c2adbedb","kubernetes.io/config.seen":"2023-12-05T20:04:00.077940382Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1205 20:05:33.867012   72782 request.go:629] Waited for 196.268368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:05:33.867087   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-930892
	I1205 20:05:33.867097   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:33.867123   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:33.867135   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:33.869632   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:33.869679   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:33.869688   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:33 GMT
	I1205 20:05:33.869694   72782 round_trippers.go:580]     Audit-Id: 07031a3f-94c7-4c6c-b1ad-9af31ac0698f
	I1205 20:05:33.869700   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:33.869706   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:33.869713   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:33.869719   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:33.869840   72782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"437","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:57Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1205 20:05:33.870231   72782 pod_ready.go:92] pod "kube-scheduler-multinode-930892" in "kube-system" namespace has status "Ready":"True"
	I1205 20:05:33.870254   72782 pod_ready.go:81] duration metric: took 399.328921ms waiting for pod "kube-scheduler-multinode-930892" in "kube-system" namespace to be "Ready" ...
	I1205 20:05:33.870270   72782 pod_ready.go:38] duration metric: took 1.200171889s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:05:33.870287   72782 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:05:33.870348   72782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:05:33.883803   72782 system_svc.go:56] duration metric: took 13.506888ms WaitForService to wait for kubelet.
	I1205 20:05:33.883867   72782 kubeadm.go:581] duration metric: took 32.2426042s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1205 20:05:33.883901   72782 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:05:34.067429   72782 request.go:629] Waited for 183.446811ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1205 20:05:34.067498   72782 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1205 20:05:34.067504   72782 round_trippers.go:469] Request Headers:
	I1205 20:05:34.067513   72782 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:05:34.067524   72782 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1205 20:05:34.069982   72782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:05:34.069999   72782 round_trippers.go:577] Response Headers:
	I1205 20:05:34.070007   72782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: adeb0538-278b-4d9c-b38c-4b9dda6251ea
	I1205 20:05:34.070053   72782 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:05:34 GMT
	I1205 20:05:34.070060   72782 round_trippers.go:580]     Audit-Id: 24910fb5-4c51-4943-8697-cb8746a0bbad
	I1205 20:05:34.070067   72782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:05:34.070079   72782 round_trippers.go:580]     Content-Type: application/json
	I1205 20:05:34.070085   72782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 205c9ae7-91da-4bf4-83aa-59f04d5d917b
	I1205 20:05:34.070343   72782 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"546"},"items":[{"metadata":{"name":"multinode-930892","uid":"e66782ea-2078-4e57-b4b7-8eea18615d0c","resourceVersion":"437","creationTimestamp":"2023-12-05T20:03:57Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-930892","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-930892","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_04_01_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12884 chars]
	I1205 20:05:34.070978   72782 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1205 20:05:34.070998   72782 node_conditions.go:123] node cpu capacity is 2
	I1205 20:05:34.071009   72782 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1205 20:05:34.071018   72782 node_conditions.go:123] node cpu capacity is 2
	I1205 20:05:34.071023   72782 node_conditions.go:105] duration metric: took 187.111287ms to run NodePressure ...
	I1205 20:05:34.071037   72782 start.go:228] waiting for startup goroutines ...
	I1205 20:05:34.071064   72782 start.go:242] writing updated cluster config ...
	I1205 20:05:34.071353   72782 ssh_runner.go:195] Run: rm -f paused
	I1205 20:05:34.129581   72782 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1205 20:05:34.132814   72782 out.go:177] * Done! kubectl is now configured to use "multinode-930892" cluster and "default" namespace by default
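The wait loop above is ordinary client-go polling: each GET against the apiserver is rate-limited by the client itself, which is why request.go reports "Waited for ...ms due to client-side throttling, not priority and fairness". A minimal sketch of the same readiness check, assuming a default kubeconfig and reusing the pod name from this run (illustrative only, not minikube's actual code):

```go
// Minimal client-go readiness poll, in the spirit of pod_ready.go above.
// Assumes ~/.kube/config points at the cluster; the pod/namespace names
// are copied from the log purely for illustration.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(6 * time.Minute) // same 6m budget as the log
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").
			Get(context.TODO(), "kube-scheduler-multinode-930892", metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		// client-go's default rate limiter produces the client-side
		// throttling seen in the log; this sleep just keeps the poll polite.
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for pod readiness")
}
```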
	
	* 
	* ==> CRI-O <==
	* Dec 05 20:04:45 multinode-930892 crio[902]: time="2023-12-05 20:04:45.915907243Z" level=info msg="Starting container: 63272267300c439fe45d547fb39642ebcdd93438ae0902bb902ebf0311392f54" id=233eeb91-046c-4ef5-b41d-f003ed76876a name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 20:04:45 multinode-930892 crio[902]: time="2023-12-05 20:04:45.918858712Z" level=info msg="Created container 4c4559f5f54187034837f27dd4ab4b84cf664d58941280f01a95880be26fe3fe: kube-system/coredns-5dd5756b68-jg6xb/coredns" id=c0533246-0eda-4318-9e0a-59bcc926e055 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 20:04:45 multinode-930892 crio[902]: time="2023-12-05 20:04:45.919550780Z" level=info msg="Starting container: 4c4559f5f54187034837f27dd4ab4b84cf664d58941280f01a95880be26fe3fe" id=bc11a304-8892-441c-baad-ef09dfd17a58 name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 20:04:45 multinode-930892 crio[902]: time="2023-12-05 20:04:45.932865830Z" level=info msg="Started container" PID=1943 containerID=63272267300c439fe45d547fb39642ebcdd93438ae0902bb902ebf0311392f54 description=kube-system/storage-provisioner/storage-provisioner id=233eeb91-046c-4ef5-b41d-f003ed76876a name=/runtime.v1.RuntimeService/StartContainer sandboxID=5ab6418f65f16b9ec367a46d590a2a9e3a6c0e79b28df6c5dd9fc53b773f147c
	Dec 05 20:04:45 multinode-930892 crio[902]: time="2023-12-05 20:04:45.933715635Z" level=info msg="Started container" PID=1949 containerID=4c4559f5f54187034837f27dd4ab4b84cf664d58941280f01a95880be26fe3fe description=kube-system/coredns-5dd5756b68-jg6xb/coredns id=bc11a304-8892-441c-baad-ef09dfd17a58 name=/runtime.v1.RuntimeService/StartContainer sandboxID=85b29892484dbe325320a77e7c5272fb84484828c3d3b02b57019f482aaf7b66
	Dec 05 20:05:35 multinode-930892 crio[902]: time="2023-12-05 20:05:35.950901156Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-ctbfn/POD" id=3c53a92b-727f-454d-88f0-a1f217bc127c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 05 20:05:35 multinode-930892 crio[902]: time="2023-12-05 20:05:35.950967010Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 05 20:05:35 multinode-930892 crio[902]: time="2023-12-05 20:05:35.967800829Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-ctbfn Namespace:default ID:c5ea53473852880b6f96ccc958ab3b3f37d6f5370b4d3beb88847814f0e790c2 UID:18a2229d-5cd8-4834-8647-6671a6890af7 NetNS:/var/run/netns/7192037e-0e9a-4601-99aa-8ac684d41b5a Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 05 20:05:35 multinode-930892 crio[902]: time="2023-12-05 20:05:35.967842454Z" level=info msg="Adding pod default_busybox-5bc68d56bd-ctbfn to CNI network \"kindnet\" (type=ptp)"
	Dec 05 20:05:35 multinode-930892 crio[902]: time="2023-12-05 20:05:35.976712992Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-ctbfn Namespace:default ID:c5ea53473852880b6f96ccc958ab3b3f37d6f5370b4d3beb88847814f0e790c2 UID:18a2229d-5cd8-4834-8647-6671a6890af7 NetNS:/var/run/netns/7192037e-0e9a-4601-99aa-8ac684d41b5a Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 05 20:05:35 multinode-930892 crio[902]: time="2023-12-05 20:05:35.976849018Z" level=info msg="Checking pod default_busybox-5bc68d56bd-ctbfn for CNI network kindnet (type=ptp)"
	Dec 05 20:05:35 multinode-930892 crio[902]: time="2023-12-05 20:05:35.993733200Z" level=info msg="Ran pod sandbox c5ea53473852880b6f96ccc958ab3b3f37d6f5370b4d3beb88847814f0e790c2 with infra container: default/busybox-5bc68d56bd-ctbfn/POD" id=3c53a92b-727f-454d-88f0-a1f217bc127c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 05 20:05:35 multinode-930892 crio[902]: time="2023-12-05 20:05:35.994713935Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=c899409f-78a5-4a27-b5f1-80be57f69dd7 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:05:35 multinode-930892 crio[902]: time="2023-12-05 20:05:35.994937921Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=c899409f-78a5-4a27-b5f1-80be57f69dd7 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:05:35 multinode-930892 crio[902]: time="2023-12-05 20:05:35.995929044Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=60b56cc4-6edc-4897-867b-e8639368e7db name=/runtime.v1.ImageService/PullImage
	Dec 05 20:05:35 multinode-930892 crio[902]: time="2023-12-05 20:05:35.997936537Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Dec 05 20:05:36 multinode-930892 crio[902]: time="2023-12-05 20:05:36.521771992Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Dec 05 20:05:37 multinode-930892 crio[902]: time="2023-12-05 20:05:37.813855790Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=60b56cc4-6edc-4897-867b-e8639368e7db name=/runtime.v1.ImageService/PullImage
	Dec 05 20:05:37 multinode-930892 crio[902]: time="2023-12-05 20:05:37.815931714Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=999e229c-a310-4132-9cf2-ce776b5b3d05 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:05:37 multinode-930892 crio[902]: time="2023-12-05 20:05:37.816718101Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1496796,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=999e229c-a310-4132-9cf2-ce776b5b3d05 name=/runtime.v1.ImageService/ImageStatus
	Dec 05 20:05:37 multinode-930892 crio[902]: time="2023-12-05 20:05:37.817513054Z" level=info msg="Creating container: default/busybox-5bc68d56bd-ctbfn/busybox" id=fd066a26-e682-42eb-91a1-c70c934fba82 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 20:05:37 multinode-930892 crio[902]: time="2023-12-05 20:05:37.817619204Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 05 20:05:37 multinode-930892 crio[902]: time="2023-12-05 20:05:37.893899167Z" level=info msg="Created container 4e77e72e9b00f10ba6c78d5fcafcd4e265603fa03511bcf453080cabd0bcad31: default/busybox-5bc68d56bd-ctbfn/busybox" id=fd066a26-e682-42eb-91a1-c70c934fba82 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 05 20:05:37 multinode-930892 crio[902]: time="2023-12-05 20:05:37.894630505Z" level=info msg="Starting container: 4e77e72e9b00f10ba6c78d5fcafcd4e265603fa03511bcf453080cabd0bcad31" id=1e1222ee-c8e4-41c6-9a0d-c23a124c9430 name=/runtime.v1.RuntimeService/StartContainer
	Dec 05 20:05:37 multinode-930892 crio[902]: time="2023-12-05 20:05:37.906845730Z" level=info msg="Started container" PID=2097 containerID=4e77e72e9b00f10ba6c78d5fcafcd4e265603fa03511bcf453080cabd0bcad31 description=default/busybox-5bc68d56bd-ctbfn/busybox id=1e1222ee-c8e4-41c6-9a0d-c23a124c9430 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c5ea53473852880b6f96ccc958ab3b3f37d6f5370b4d3beb88847814f0e790c2
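The CRI-O lines above trace the full container start path for the busybox pod: RunPodSandbox, CNI attachment to the "kindnet" ptp network, an ImageStatus miss, PullImage, CreateContainer, StartContainer. A sketch of talking to the same socket with the stock CRI v1 gRPC API (roughly what `crictl ps` does; the socket path is taken from the log, the rest is illustrative):

```go
// List containers over CRI-O's gRPC socket, mirroring the
// "container status" table that follows. Run on the node, as root.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Socket path as logged: unix:///var/run/crio/crio.sock
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// %.13s truncates the 64-char container ID like the table below does.
		fmt.Printf("%.13s  %s  %s\n", c.Id, c.GetMetadata().GetName(), c.State)
	}
}
```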
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	4e77e72e9b00f       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   6 seconds ago        Running             busybox                   0                   c5ea534738528       busybox-5bc68d56bd-ctbfn
	4c4559f5f5418       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      58 seconds ago       Running             coredns                   0                   85b29892484db       coredns-5dd5756b68-jg6xb
	63272267300c4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      58 seconds ago       Running             storage-provisioner       0                   5ab6418f65f16       storage-provisioner
	5197fa7caf386       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                      About a minute ago   Running             kindnet-cni               0                   0be2518471a04       kindnet-xtm24
	957b5091e7ed1       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39                                      About a minute ago   Running             kube-proxy                0                   1c0c055c4007c       kube-proxy-skbnx
	02a97fd81026a       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54                                      About a minute ago   Running             kube-scheduler            0                   826de8ac590cc       kube-scheduler-multinode-930892
	dc2686f4ac2e0       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419                                      About a minute ago   Running             kube-apiserver            0                   58363d0bcd25a       kube-apiserver-multinode-930892
	39a30bd7ef7b1       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      About a minute ago   Running             etcd                      0                   dc9e516395bdc       etcd-multinode-930892
	6eb19e1756960       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b                                      About a minute ago   Running             kube-controller-manager   0                   80ff011cba4ae       kube-controller-manager-multinode-930892
	
	* 
	* ==> coredns [4c4559f5f54187034837f27dd4ab4b84cf664d58941280f01a95880be26fe3fe] <==
	* [INFO] 10.244.0.3:53806 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000153446s
	[INFO] 10.244.1.2:36720 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114725s
	[INFO] 10.244.1.2:50083 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00099388s
	[INFO] 10.244.1.2:50926 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000107742s
	[INFO] 10.244.1.2:37665 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000057305s
	[INFO] 10.244.1.2:44058 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000758629s
	[INFO] 10.244.1.2:52134 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000080912s
	[INFO] 10.244.1.2:52291 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000062458s
	[INFO] 10.244.1.2:53544 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079148s
	[INFO] 10.244.0.3:57540 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123374s
	[INFO] 10.244.0.3:32841 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00006629s
	[INFO] 10.244.0.3:48785 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068144s
	[INFO] 10.244.0.3:36869 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064321s
	[INFO] 10.244.1.2:33489 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135632s
	[INFO] 10.244.1.2:51074 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007251s
	[INFO] 10.244.1.2:56123 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085219s
	[INFO] 10.244.1.2:53553 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060735s
	[INFO] 10.244.0.3:59700 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000095574s
	[INFO] 10.244.0.3:39342 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000131143s
	[INFO] 10.244.0.3:36305 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000148735s
	[INFO] 10.244.0.3:42193 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00010533s
	[INFO] 10.244.1.2:44528 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000101704s
	[INFO] 10.244.1.2:41804 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000063762s
	[INFO] 10.244.1.2:51160 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000068792s
	[INFO] 10.244.1.2:48007 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000073223s
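The query pattern above is the pod resolver's search-path expansion: "kubernetes.default" is tried bare and against each search suffix (hence the NXDOMAIN answers for kubernetes.default and kubernetes.default.default.svc.cluster.local) until kubernetes.default.svc.cluster.local returns NOERROR. A hedged sketch of the pod-side lookup that produces such lines, assuming the kube-dns ClusterIP 10.96.0.10 from this cluster:

```go
// In-cluster style DNS lookup against CoreDNS. 10.96.0.10 is the kube-dns
// Service IP in this test cluster; run from a pod (or anywhere that can
// reach that IP).
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, "udp", "10.96.0.10:53") // kube-dns ClusterIP
		},
	}
	addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved:", addrs) // expect 10.96.0.1, the apiserver ClusterIP
}
```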
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-930892
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-930892
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728
	                    minikube.k8s.io/name=multinode-930892
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_05T20_04_01_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Dec 2023 20:03:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-930892
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Dec 2023 20:05:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Dec 2023 20:04:45 +0000   Tue, 05 Dec 2023 20:03:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Dec 2023 20:04:45 +0000   Tue, 05 Dec 2023 20:03:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Dec 2023 20:04:45 +0000   Tue, 05 Dec 2023 20:03:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Dec 2023 20:04:45 +0000   Tue, 05 Dec 2023 20:04:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-930892
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 bc81742f151f4ee884bffd94ad09f924
	  System UUID:                17030d69-d370-4aac-a04f-714fdca1e3a7
	  Boot ID:                    ade55ee8-b6ef-4756-8af5-2453aa07c908
	  Kernel Version:             5.15.0-1050-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-ctbfn                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-5dd5756b68-jg6xb                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     91s
	  kube-system                 etcd-multinode-930892                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         104s
	  kube-system                 kindnet-xtm24                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      91s
	  kube-system                 kube-apiserver-multinode-930892             250m (12%)    0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-controller-manager-multinode-930892    200m (10%)    0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-proxy-skbnx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-scheduler-multinode-930892             100m (5%)     0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 89s                  kube-proxy       
	  Normal  Starting                 112s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  112s (x8 over 112s)  kubelet          Node multinode-930892 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s (x8 over 112s)  kubelet          Node multinode-930892 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s (x8 over 112s)  kubelet          Node multinode-930892 status is now: NodeHasSufficientPID
	  Normal  Starting                 104s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  104s                 kubelet          Node multinode-930892 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s                 kubelet          Node multinode-930892 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s                 kubelet          Node multinode-930892 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           91s                  node-controller  Node multinode-930892 event: Registered Node multinode-930892 in Controller
	  Normal  NodeReady                59s                  kubelet          Node multinode-930892 status is now: NodeReady
	
	
	Name:               multinode-930892-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-930892-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728
	                    minikube.k8s.io/name=multinode-930892
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_05T20_05_01_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Dec 2023 20:05:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-930892-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Dec 2023 20:05:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Dec 2023 20:05:32 +0000   Tue, 05 Dec 2023 20:05:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Dec 2023 20:05:32 +0000   Tue, 05 Dec 2023 20:05:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Dec 2023 20:05:32 +0000   Tue, 05 Dec 2023 20:05:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Dec 2023 20:05:32 +0000   Tue, 05 Dec 2023 20:05:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-930892-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 469e19246db3484b884349367ae253fd
	  System UUID:                09c80cd2-7ed9-434d-9dd8-455327545a25
	  Boot ID:                    ade55ee8-b6ef-4756-8af5-2453aa07c908
	  Kernel Version:             5.15.0-1050-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-gg5q2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kindnet-mfcwg               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      44s
	  kube-system                 kube-proxy-6w78n            0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 42s                kube-proxy       
	  Normal  NodeHasSufficientMemory  44s (x5 over 45s)  kubelet          Node multinode-930892-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    44s (x5 over 45s)  kubelet          Node multinode-930892-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     44s (x5 over 45s)  kubelet          Node multinode-930892-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           41s                node-controller  Node multinode-930892-m02 event: Registered Node multinode-930892-m02 in Controller
	  Normal  NodeReady                12s                kubelet          Node multinode-930892-m02 status is now: NodeReady
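For reference, the "Allocated resources" percentages in the two node tables above are the summed pod requests over node allocatable, truncated to whole percents: 850m of the control-plane node's 2 CPUs is 42%. A small sketch of that arithmetic with the apimachinery resource types (values copied from the table):

```go
// Reproduce the 42% figure from the control-plane node's table:
// total CPU requests (850m) divided by allocatable CPU (2 cores).
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	requests := resource.MustParse("850m") // summed CPU requests
	allocatable := resource.MustParse("2") // node allocatable CPU
	pct := requests.MilliValue() * 100 / allocatable.MilliValue()
	fmt.Printf("cpu %v of %v = %d%%\n", &requests, &allocatable, pct) // cpu 850m of 2 = 42%
}
```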
	
	* 
	* ==> dmesg <==
	* [  +0.000746] FS-Cache: N-cookie c=0000000c [p=00000003 fl=2 nc=0 na=1]
	[  +0.000998] FS-Cache: N-cookie d=000000005352abd4{9p.inode} n=0000000092935cee
	[  +0.001126] FS-Cache: N-key=[8] '7d6ced0000000000'
	[  +0.003128] FS-Cache: Duplicate cookie detected
	[  +0.000743] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001056] FS-Cache: O-cookie d=000000005352abd4{9p.inode} n=0000000096f14bef
	[  +0.001106] FS-Cache: O-key=[8] '7d6ced0000000000'
	[  +0.000771] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.000986] FS-Cache: N-cookie d=000000005352abd4{9p.inode} n=0000000087309102
	[  +0.001095] FS-Cache: N-key=[8] '7d6ced0000000000'
	[  +2.579818] FS-Cache: Duplicate cookie detected
	[  +0.000741] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001012] FS-Cache: O-cookie d=000000005352abd4{9p.inode} n=00000000c814e0ec
	[  +0.001118] FS-Cache: O-key=[8] '7c6ced0000000000'
	[  +0.000756] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.000988] FS-Cache: N-cookie d=000000005352abd4{9p.inode} n=0000000095aba95a
	[  +0.001110] FS-Cache: N-key=[8] '7c6ced0000000000'
	[  +0.433599] FS-Cache: Duplicate cookie detected
	[  +0.000720] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001003] FS-Cache: O-cookie d=000000005352abd4{9p.inode} n=00000000851860cc
	[  +0.001106] FS-Cache: O-key=[8] '876ced0000000000'
	[  +0.000728] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.000964] FS-Cache: N-cookie d=000000005352abd4{9p.inode} n=0000000092935cee
	[  +0.001081] FS-Cache: N-key=[8] '876ced0000000000'
	[Dec 5 19:52] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	* 
	* ==> etcd [39a30bd7ef7b1abcdf40756b788280c5c00b1f256a463011303a21370741ca4e] <==
	* {"level":"info","ts":"2023-12-05T20:03:53.014585Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-05T20:03:53.014657Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-05T20:03:53.014708Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-05T20:03:53.014342Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-05T20:03:53.014364Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-05T20:03:53.020014Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-12-05T20:03:53.020177Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-12-05T20:03:53.183832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-05T20:03:53.18395Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-05T20:03:53.18399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-12-05T20:03:53.184042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-12-05T20:03:53.184073Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-12-05T20:03:53.184123Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-12-05T20:03:53.184155Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-12-05T20:03:53.185342Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T20:03:53.187994Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-930892 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-05T20:03:53.188069Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-05T20:03:53.188974Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T20:03:53.189851Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T20:03:53.189929Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T20:03:53.189132Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-05T20:03:53.191993Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-12-05T20:03:53.189454Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-05T20:03:53.202421Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-05T20:03:53.20251Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  20:05:44 up 48 min,  0 users,  load average: 1.17, 1.48, 1.20
	Linux multinode-930892 5.15.0-1050-aws #55~20.04.1-Ubuntu SMP Mon Nov 6 12:18:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [5197fa7caf386e8241ef4eb279c347aa965df9d6cd31feea6bb4e75d4c4b0d15] <==
	* I1205 20:04:14.863485       1 main.go:116] setting mtu 1500 for CNI 
	I1205 20:04:14.863517       1 main.go:146] kindnetd IP family: "ipv4"
	I1205 20:04:14.863533       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1205 20:04:45.056491       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I1205 20:04:45.070475       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1205 20:04:45.070508       1 main.go:227] handling current node
	I1205 20:04:55.086525       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1205 20:04:55.087108       1 main.go:227] handling current node
	I1205 20:05:05.099595       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1205 20:05:05.099624       1 main.go:227] handling current node
	I1205 20:05:05.099637       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1205 20:05:05.099643       1 main.go:250] Node multinode-930892-m02 has CIDR [10.244.1.0/24] 
	I1205 20:05:05.099826       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I1205 20:05:15.112016       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1205 20:05:15.112046       1 main.go:227] handling current node
	I1205 20:05:15.112057       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1205 20:05:15.112062       1 main.go:250] Node multinode-930892-m02 has CIDR [10.244.1.0/24] 
	I1205 20:05:25.116909       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1205 20:05:25.116938       1 main.go:227] handling current node
	I1205 20:05:25.116948       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1205 20:05:25.116955       1 main.go:250] Node multinode-930892-m02 has CIDR [10.244.1.0/24] 
	I1205 20:05:35.130788       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1205 20:05:35.130815       1 main.go:227] handling current node
	I1205 20:05:35.130826       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1205 20:05:35.130839       1 main.go:250] Node multinode-930892-m02 has CIDR [10.244.1.0/24] 
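kindnet's reconcile loop above fires roughly every ten seconds: the local node is only acknowledged, while each remote node gets a route sending its PodCIDR via the node IP (here 10.244.1.0/24 via 192.168.58.3). A sketch of that single route operation using the vishvananda/netlink package, which kindnet itself builds on (values copied from the log; Linux only, root required):

```go
// Install the pod-CIDR route kindnet logs above:
//   Dst 10.244.1.0/24 (multinode-930892-m02's PodCIDR) via Gw 192.168.58.3.
package main

import (
	"log"
	"net"

	"github.com/vishvananda/netlink"
)

func main() {
	_, podCIDR, err := net.ParseCIDR("10.244.1.0/24")
	if err != nil {
		log.Fatal(err)
	}
	route := netlink.Route{
		Dst: podCIDR,
		Gw:  net.ParseIP("192.168.58.3"),
	}
	// RouteReplace is idempotent, which suits a periodic reconcile loop.
	if err := netlink.RouteReplace(&route); err != nil {
		log.Fatal(err)
	}
	log.Printf("installed route %s via %s", podCIDR, route.Gw)
}
```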
	
	* 
	* ==> kube-apiserver [dc2686f4ac2e0467ecb372b012a95c933bd2b39d70cb2eaae196e5cd79e4c0d5] <==
	* I1205 20:03:57.139553       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1205 20:03:57.139678       1 aggregator.go:166] initial CRD sync complete...
	I1205 20:03:57.139691       1 autoregister_controller.go:141] Starting autoregister controller
	I1205 20:03:57.139696       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1205 20:03:57.139702       1 cache.go:39] Caches are synced for autoregister controller
	I1205 20:03:57.147618       1 controller.go:624] quota admission added evaluator for: namespaces
	E1205 20:03:57.175300       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1205 20:03:57.378514       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 20:03:57.867779       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1205 20:03:57.873613       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1205 20:03:57.873644       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1205 20:03:58.353948       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 20:03:58.392165       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1205 20:03:58.482173       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1205 20:03:58.487662       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I1205 20:03:58.488696       1 controller.go:624] quota admission added evaluator for: endpoints
	I1205 20:03:58.492942       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 20:03:58.959564       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1205 20:03:59.993808       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1205 20:04:00.007394       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1205 20:04:00.021757       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1205 20:04:13.837658       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1205 20:04:13.899814       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	E1205 20:05:39.556988       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0x4009459620), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0x400d1fa870), ResponseWriter:(*httpsnoop.rw)(0x400d1fa870), Flusher:(*httpsnoop.rw)(0x400d1fa870), CloseNotifier:(*httpsnoop.rw)(0x400d1fa870), Pusher:(*httpsnoop.rw)(0x400d1fa870)}}, encoder:(*versioning.codec)(0x400bde4c80), memAllocator:(*runtime.Allocator)(0x400d6c1908)})
	E1205 20:05:41.198995       1 upgradeaware.go:439] Error proxying data from backend to client: write tcp 192.168.58.2:8443->192.168.58.1:39354: write: broken pipe
	
	* 
	* ==> kube-controller-manager [6eb19e1756960a57cb38c3d84a9275c90f0420b90ca7ba125a61610d5cf4750a] <==
	* I1205 20:04:14.364954       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="41.961µs"
	I1205 20:04:45.440987       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="79.484µs"
	I1205 20:04:45.462020       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="66.471µs"
	I1205 20:04:46.253573       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="92.473µs"
	I1205 20:04:46.298444       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.589691ms"
	I1205 20:04:46.299363       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="76.448µs"
	I1205 20:04:48.086886       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1205 20:05:00.896740       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-930892-m02\" does not exist"
	I1205 20:05:00.923617       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-6w78n"
	I1205 20:05:00.924274       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-mfcwg"
	I1205 20:05:00.940004       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-930892-m02" podCIDRs=["10.244.1.0/24"]
	I1205 20:05:03.089259       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-930892-m02"
	I1205 20:05:03.089335       1 event.go:307] "Event occurred" object="multinode-930892-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-930892-m02 event: Registered Node multinode-930892-m02 in Controller"
	I1205 20:05:32.270775       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-930892-m02"
	I1205 20:05:34.991290       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1205 20:05:35.021542       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-gg5q2"
	I1205 20:05:35.036838       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-ctbfn"
	I1205 20:05:35.062159       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="70.537404ms"
	I1205 20:05:35.073232       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="11.01971ms"
	I1205 20:05:35.099468       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="26.167253ms"
	I1205 20:05:35.099657       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="72.986µs"
	I1205 20:05:38.348329       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.44425ms"
	I1205 20:05:38.348705       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="35.242µs"
	I1205 20:05:39.548681       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="3.850751ms"
	I1205 20:05:39.550356       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="38.876µs"
	
	* 
	* ==> kube-proxy [957b5091e7ed13548e5ffb8f719edcfa44c2a58b410ab847e0b8f53fa9b7895a] <==
	* I1205 20:04:14.979331       1 server_others.go:69] "Using iptables proxy"
	I1205 20:04:14.994231       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I1205 20:04:15.036679       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1205 20:04:15.040323       1 server_others.go:152] "Using iptables Proxier"
	I1205 20:04:15.040443       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1205 20:04:15.040476       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1205 20:04:15.040536       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1205 20:04:15.041210       1 server.go:846] "Version info" version="v1.28.4"
	I1205 20:04:15.041782       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:04:15.048605       1 config.go:188] "Starting service config controller"
	I1205 20:04:15.052107       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1205 20:04:15.052213       1 config.go:97] "Starting endpoint slice config controller"
	I1205 20:04:15.052248       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1205 20:04:15.052816       1 config.go:315] "Starting node config controller"
	I1205 20:04:15.055057       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1205 20:04:15.158986       1 shared_informer.go:318] Caches are synced for node config
	I1205 20:04:15.161177       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1205 20:04:15.161263       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [02a97fd81026a5115d619f2056263a3990b0aeba119177071ceffffac973b364] <==
	* W1205 20:03:57.264426       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 20:03:57.264467       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1205 20:03:57.264434       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1205 20:03:57.264551       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 20:03:57.264568       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 20:03:57.264552       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1205 20:03:57.264619       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1205 20:03:57.264633       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1205 20:03:57.264693       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1205 20:03:57.264732       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1205 20:03:57.264737       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1205 20:03:57.264797       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1205 20:03:57.264816       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1205 20:03:57.264859       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1205 20:03:57.264877       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1205 20:03:57.264861       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1205 20:03:57.264695       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 20:03:57.264898       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1205 20:03:57.264769       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1205 20:03:57.264911       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1205 20:03:58.153411       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 20:03:58.153532       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 20:03:58.208575       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1205 20:03:58.208723       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I1205 20:04:00.452884       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Dec 05 20:04:13 multinode-930892 kubelet[1389]: I1205 20:04:13.987280    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jthjt\" (UniqueName: \"kubernetes.io/projected/8c6bc758-aa3f-4204-98bb-68c004cdc2a8-kube-api-access-jthjt\") pod \"kindnet-xtm24\" (UID: \"8c6bc758-aa3f-4204-98bb-68c004cdc2a8\") " pod="kube-system/kindnet-xtm24"
	Dec 05 20:04:13 multinode-930892 kubelet[1389]: I1205 20:04:13.987309    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzqxj\" (UniqueName: \"kubernetes.io/projected/18565024-772b-429b-8d9b-77a81590210e-kube-api-access-kzqxj\") pod \"kube-proxy-skbnx\" (UID: \"18565024-772b-429b-8d9b-77a81590210e\") " pod="kube-system/kube-proxy-skbnx"
	Dec 05 20:04:13 multinode-930892 kubelet[1389]: I1205 20:04:13.987332    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/18565024-772b-429b-8d9b-77a81590210e-xtables-lock\") pod \"kube-proxy-skbnx\" (UID: \"18565024-772b-429b-8d9b-77a81590210e\") " pod="kube-system/kube-proxy-skbnx"
	Dec 05 20:04:13 multinode-930892 kubelet[1389]: I1205 20:04:13.987354    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/18565024-772b-429b-8d9b-77a81590210e-lib-modules\") pod \"kube-proxy-skbnx\" (UID: \"18565024-772b-429b-8d9b-77a81590210e\") " pod="kube-system/kube-proxy-skbnx"
	Dec 05 20:04:13 multinode-930892 kubelet[1389]: I1205 20:04:13.987376    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8c6bc758-aa3f-4204-98bb-68c004cdc2a8-cni-cfg\") pod \"kindnet-xtm24\" (UID: \"8c6bc758-aa3f-4204-98bb-68c004cdc2a8\") " pod="kube-system/kindnet-xtm24"
	Dec 05 20:04:13 multinode-930892 kubelet[1389]: I1205 20:04:13.987398    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c6bc758-aa3f-4204-98bb-68c004cdc2a8-lib-modules\") pod \"kindnet-xtm24\" (UID: \"8c6bc758-aa3f-4204-98bb-68c004cdc2a8\") " pod="kube-system/kindnet-xtm24"
	Dec 05 20:04:13 multinode-930892 kubelet[1389]: I1205 20:04:13.987420    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/18565024-772b-429b-8d9b-77a81590210e-kube-proxy\") pod \"kube-proxy-skbnx\" (UID: \"18565024-772b-429b-8d9b-77a81590210e\") " pod="kube-system/kube-proxy-skbnx"
	Dec 05 20:04:15 multinode-930892 kubelet[1389]: I1205 20:04:15.224873    1389 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-xtm24" podStartSLOduration=2.224829273 podCreationTimestamp="2023-12-05 20:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-05 20:04:15.202821631 +0000 UTC m=+15.231131682" watchObservedRunningTime="2023-12-05 20:04:15.224829273 +0000 UTC m=+15.253139332"
	Dec 05 20:04:20 multinode-930892 kubelet[1389]: I1205 20:04:20.150205    1389 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-skbnx" podStartSLOduration=7.150160467 podCreationTimestamp="2023-12-05 20:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-05 20:04:15.22517289 +0000 UTC m=+15.253482941" watchObservedRunningTime="2023-12-05 20:04:20.150160467 +0000 UTC m=+20.178470526"
	Dec 05 20:04:45 multinode-930892 kubelet[1389]: I1205 20:04:45.410769    1389 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 05 20:04:45 multinode-930892 kubelet[1389]: I1205 20:04:45.435817    1389 topology_manager.go:215] "Topology Admit Handler" podUID="0177b9f4-828e-4903-acc7-d50fee28986c" podNamespace="kube-system" podName="storage-provisioner"
	Dec 05 20:04:45 multinode-930892 kubelet[1389]: I1205 20:04:45.439535    1389 topology_manager.go:215] "Topology Admit Handler" podUID="68a13ae5-1cba-4475-b33a-8090d3001eae" podNamespace="kube-system" podName="coredns-5dd5756b68-jg6xb"
	Dec 05 20:04:45 multinode-930892 kubelet[1389]: I1205 20:04:45.586266    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0177b9f4-828e-4903-acc7-d50fee28986c-tmp\") pod \"storage-provisioner\" (UID: \"0177b9f4-828e-4903-acc7-d50fee28986c\") " pod="kube-system/storage-provisioner"
	Dec 05 20:04:45 multinode-930892 kubelet[1389]: I1205 20:04:45.586317    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/68a13ae5-1cba-4475-b33a-8090d3001eae-config-volume\") pod \"coredns-5dd5756b68-jg6xb\" (UID: \"68a13ae5-1cba-4475-b33a-8090d3001eae\") " pod="kube-system/coredns-5dd5756b68-jg6xb"
	Dec 05 20:04:45 multinode-930892 kubelet[1389]: I1205 20:04:45.586351    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbm8h\" (UniqueName: \"kubernetes.io/projected/68a13ae5-1cba-4475-b33a-8090d3001eae-kube-api-access-zbm8h\") pod \"coredns-5dd5756b68-jg6xb\" (UID: \"68a13ae5-1cba-4475-b33a-8090d3001eae\") " pod="kube-system/coredns-5dd5756b68-jg6xb"
	Dec 05 20:04:45 multinode-930892 kubelet[1389]: I1205 20:04:45.586374    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjh6j\" (UniqueName: \"kubernetes.io/projected/0177b9f4-828e-4903-acc7-d50fee28986c-kube-api-access-fjh6j\") pod \"storage-provisioner\" (UID: \"0177b9f4-828e-4903-acc7-d50fee28986c\") " pod="kube-system/storage-provisioner"
	Dec 05 20:04:45 multinode-930892 kubelet[1389]: W1205 20:04:45.791599    1389 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/d5e6ffca9b1cdea5ea6e00c49ce2f376d8a49697a136a6f3830a6acb7f8f8841/crio-5ab6418f65f16b9ec367a46d590a2a9e3a6c0e79b28df6c5dd9fc53b773f147c WatchSource:0}: Error finding container 5ab6418f65f16b9ec367a46d590a2a9e3a6c0e79b28df6c5dd9fc53b773f147c: Status 404 returned error can't find the container with id 5ab6418f65f16b9ec367a46d590a2a9e3a6c0e79b28df6c5dd9fc53b773f147c
	Dec 05 20:04:45 multinode-930892 kubelet[1389]: W1205 20:04:45.808040    1389 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/d5e6ffca9b1cdea5ea6e00c49ce2f376d8a49697a136a6f3830a6acb7f8f8841/crio-85b29892484dbe325320a77e7c5272fb84484828c3d3b02b57019f482aaf7b66 WatchSource:0}: Error finding container 85b29892484dbe325320a77e7c5272fb84484828c3d3b02b57019f482aaf7b66: Status 404 returned error can't find the container with id 85b29892484dbe325320a77e7c5272fb84484828c3d3b02b57019f482aaf7b66
	Dec 05 20:04:46 multinode-930892 kubelet[1389]: I1205 20:04:46.269073    1389 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-jg6xb" podStartSLOduration=33.269033295 podCreationTimestamp="2023-12-05 20:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-05 20:04:46.253710998 +0000 UTC m=+46.282021057" watchObservedRunningTime="2023-12-05 20:04:46.269033295 +0000 UTC m=+46.297343346"
	Dec 05 20:04:46 multinode-930892 kubelet[1389]: I1205 20:04:46.269412    1389 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=31.269393019 podCreationTimestamp="2023-12-05 20:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-05 20:04:46.268848571 +0000 UTC m=+46.297158630" watchObservedRunningTime="2023-12-05 20:04:46.269393019 +0000 UTC m=+46.297703078"
	Dec 05 20:05:35 multinode-930892 kubelet[1389]: I1205 20:05:35.049479    1389 topology_manager.go:215] "Topology Admit Handler" podUID="18a2229d-5cd8-4834-8647-6671a6890af7" podNamespace="default" podName="busybox-5bc68d56bd-ctbfn"
	Dec 05 20:05:35 multinode-930892 kubelet[1389]: W1205 20:05:35.063086    1389 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:multinode-930892" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'multinode-930892' and this object
	Dec 05 20:05:35 multinode-930892 kubelet[1389]: E1205 20:05:35.063154    1389 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:multinode-930892" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'multinode-930892' and this object
	Dec 05 20:05:35 multinode-930892 kubelet[1389]: I1205 20:05:35.189383    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tx2t\" (UniqueName: \"kubernetes.io/projected/18a2229d-5cd8-4834-8647-6671a6890af7-kube-api-access-6tx2t\") pod \"busybox-5bc68d56bd-ctbfn\" (UID: \"18a2229d-5cd8-4834-8647-6671a6890af7\") " pod="default/busybox-5bc68d56bd-ctbfn"
	Dec 05 20:05:35 multinode-930892 kubelet[1389]: W1205 20:05:35.993589    1389 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/d5e6ffca9b1cdea5ea6e00c49ce2f376d8a49697a136a6f3830a6acb7f8f8841/crio-c5ea53473852880b6f96ccc958ab3b3f37d6f5370b4d3beb88847814f0e790c2 WatchSource:0}: Error finding container c5ea53473852880b6f96ccc958ab3b3f37d6f5370b4d3beb88847814f0e790c2: Status 404 returned error can't find the container with id c5ea53473852880b6f96ccc958ab3b3f37d6f5370b4d3beb88847814f0e790c2
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p multinode-930892 -n multinode-930892
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-930892 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (4.45s)

TestRunningBinaryUpgrade (73.34s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.17.0.78967457.exe start -p running-upgrade-106879 --memory=2200 --vm-driver=docker  --container-runtime=crio
E1205 20:21:15.519993    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/functional-025502/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.17.0.78967457.exe start -p running-upgrade-106879 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m4.795644716s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-106879 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p running-upgrade-106879 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (3.771711976s)

-- stdout --
	* [running-upgrade-106879] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17731
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17731-2478/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-2478/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-106879 in cluster running-upgrade-106879
	* Pulling base image ...
	* Updating the running docker "running-upgrade-106879" container ...
	
	

-- /stdout --
** stderr ** 
	I1205 20:22:01.983071  133413 out.go:296] Setting OutFile to fd 1 ...
	I1205 20:22:01.983312  133413 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:22:01.983340  133413 out.go:309] Setting ErrFile to fd 2...
	I1205 20:22:01.983361  133413 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:22:01.983861  133413 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-2478/.minikube/bin
	I1205 20:22:01.984516  133413 out.go:303] Setting JSON to false
	I1205 20:22:01.985802  133413 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3868,"bootTime":1701803854,"procs":295,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 20:22:01.985900  133413 start.go:138] virtualization:  
	I1205 20:22:01.989014  133413 out.go:177] * [running-upgrade-106879] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1205 20:22:01.991928  133413 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 20:22:01.994552  133413 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:22:01.992075  133413 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17731-2478/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I1205 20:22:01.992103  133413 notify.go:220] Checking for updates...
	I1205 20:22:01.996525  133413 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-2478/kubeconfig
	I1205 20:22:01.998454  133413 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-2478/.minikube
	I1205 20:22:02.001528  133413 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 20:22:02.003146  133413 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:22:02.005732  133413 config.go:182] Loaded profile config "running-upgrade-106879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1205 20:22:02.008262  133413 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1205 20:22:02.010258  133413 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 20:22:02.039897  133413 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1205 20:22:02.040017  133413 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 20:22:02.142260  133413 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2023-12-05 20:22:02.131438121 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1205 20:22:02.142411  133413 docker.go:295] overlay module found
	I1205 20:22:02.144728  133413 out.go:177] * Using the docker driver based on existing profile
	I1205 20:22:02.146633  133413 start.go:298] selected driver: docker
	I1205 20:22:02.146661  133413 start.go:902] validating driver "docker" against &{Name:running-upgrade-106879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-106879 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.128 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1205 20:22:02.146747  133413 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:22:02.147534  133413 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 20:22:02.199626  133413 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17731-2478/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I1205 20:22:02.246160  133413 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2023-12-05 20:22:02.233833621 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1205 20:22:02.246490  133413 cni.go:84] Creating CNI manager for ""
	I1205 20:22:02.246505  133413 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 20:22:02.246518  133413 start_flags.go:323] config:
	{Name:running-upgrade-106879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-106879 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.128 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1205 20:22:02.249810  133413 out.go:177] * Starting control plane node running-upgrade-106879 in cluster running-upgrade-106879
	I1205 20:22:02.252540  133413 cache.go:121] Beginning downloading kic base image for docker with crio
	I1205 20:22:02.254442  133413 out.go:177] * Pulling base image ...
	I1205 20:22:02.256366  133413 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1205 20:22:02.256509  133413 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1205 20:22:02.283346  133413 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I1205 20:22:02.283373  133413 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W1205 20:22:02.585089  133413 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1205 20:22:02.585228  133413 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/running-upgrade-106879/config.json ...
	I1205 20:22:02.585475  133413 cache.go:194] Successfully downloaded all kic artifacts
	I1205 20:22:02.585529  133413 start.go:365] acquiring machines lock for running-upgrade-106879: {Name:mk3992c36a0cfccaf2dd8bd2871d74aebf863b0f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:22:02.585586  133413 start.go:369] acquired machines lock for "running-upgrade-106879" in 30.721µs
	I1205 20:22:02.585606  133413 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:22:02.585615  133413 fix.go:54] fixHost starting: 
	I1205 20:22:02.585864  133413 cli_runner.go:164] Run: docker container inspect running-upgrade-106879 --format={{.State.Status}}
	I1205 20:22:02.586106  133413 cache.go:107] acquiring lock: {Name:mk8a4de1334950434f49dfbc7cc0e43bfdbdb2f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:22:02.586171  133413 cache.go:115] /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 20:22:02.586185  133413 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 81.716µs
	I1205 20:22:02.586197  133413 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 20:22:02.586213  133413 cache.go:107] acquiring lock: {Name:mka6142c8784c3eb00c6bbf3953947386497eac0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:22:02.586245  133413 cache.go:115] /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1205 20:22:02.586254  133413 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 47.532µs
	I1205 20:22:02.586261  133413 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I1205 20:22:02.586278  133413 cache.go:107] acquiring lock: {Name:mk52e13bce5a8ae7e8a902fdeaaeade9860de835 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:22:02.586305  133413 cache.go:115] /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1205 20:22:02.586315  133413 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 39.213µs
	I1205 20:22:02.586322  133413 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I1205 20:22:02.586331  133413 cache.go:107] acquiring lock: {Name:mkb0e889217661a7ef4acaf28adec14f1a06cff8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:22:02.586361  133413 cache.go:115] /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1205 20:22:02.586370  133413 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 39.943µs
	I1205 20:22:02.586376  133413 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I1205 20:22:02.586388  133413 cache.go:107] acquiring lock: {Name:mk0d29774b4c5a0581fe381cfea74b828beff54c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:22:02.586418  133413 cache.go:115] /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1205 20:22:02.586427  133413 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 42.306µs
	I1205 20:22:02.586433  133413 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I1205 20:22:02.586442  133413 cache.go:107] acquiring lock: {Name:mk09e84b1799fc084fd06411f9276e8b348e91e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:22:02.586472  133413 cache.go:115] /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1205 20:22:02.586481  133413 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 39.894µs
	I1205 20:22:02.586487  133413 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I1205 20:22:02.586496  133413 cache.go:107] acquiring lock: {Name:mk546e34e8abee7d5a5f944e7950213bca6c089c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:22:02.586525  133413 cache.go:115] /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1205 20:22:02.586537  133413 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 39.13µs
	I1205 20:22:02.586544  133413 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1205 20:22:02.586552  133413 cache.go:107] acquiring lock: {Name:mk1fa1653fd02fdb26ff9477a9afc4d03e380d8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:22:02.586580  133413 cache.go:115] /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1205 20:22:02.586589  133413 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 37.53µs
	I1205 20:22:02.586596  133413 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I1205 20:22:02.586601  133413 cache.go:87] Successfully saved all images to host disk.
	I1205 20:22:02.607714  133413 fix.go:102] recreateIfNeeded on running-upgrade-106879: state=Running err=<nil>
	W1205 20:22:02.607740  133413 fix.go:128] unexpected machine state, will restart: <nil>
	I1205 20:22:02.609927  133413 out.go:177] * Updating the running docker "running-upgrade-106879" container ...
	I1205 20:22:02.611841  133413 machine.go:88] provisioning docker machine ...
	I1205 20:22:02.611870  133413 ubuntu.go:169] provisioning hostname "running-upgrade-106879"
	I1205 20:22:02.611940  133413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-106879
	I1205 20:22:02.631886  133413 main.go:141] libmachine: Using SSH client type: native
	I1205 20:22:02.632348  133413 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32958 <nil> <nil>}
	I1205 20:22:02.632368  133413 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-106879 && echo "running-upgrade-106879" | sudo tee /etc/hostname
	I1205 20:22:02.786145  133413 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-106879
	
	I1205 20:22:02.786217  133413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-106879
	I1205 20:22:02.804387  133413 main.go:141] libmachine: Using SSH client type: native
	I1205 20:22:02.804785  133413 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32958 <nil> <nil>}
	I1205 20:22:02.804807  133413 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-106879' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-106879/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-106879' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:22:02.949754  133413 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:22:02.949780  133413 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17731-2478/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-2478/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-2478/.minikube}
	I1205 20:22:02.949801  133413 ubuntu.go:177] setting up certificates
	I1205 20:22:02.949811  133413 provision.go:83] configureAuth start
	I1205 20:22:02.949870  133413 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-106879
	I1205 20:22:02.967966  133413 provision.go:138] copyHostCerts
	I1205 20:22:02.968021  133413 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-2478/.minikube/ca.pem, removing ...
	I1205 20:22:02.968049  133413 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-2478/.minikube/ca.pem
	I1205 20:22:02.968122  133413 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-2478/.minikube/ca.pem (1078 bytes)
	I1205 20:22:02.968209  133413 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-2478/.minikube/cert.pem, removing ...
	I1205 20:22:02.968214  133413 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-2478/.minikube/cert.pem
	I1205 20:22:02.968239  133413 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-2478/.minikube/cert.pem (1123 bytes)
	I1205 20:22:02.968288  133413 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-2478/.minikube/key.pem, removing ...
	I1205 20:22:02.968293  133413 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-2478/.minikube/key.pem
	I1205 20:22:02.968315  133413 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-2478/.minikube/key.pem (1679 bytes)
	I1205 20:22:02.968356  133413 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-2478/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-106879 san=[192.168.70.128 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-106879]
	I1205 20:22:03.159576  133413 provision.go:172] copyRemoteCerts
	I1205 20:22:03.159718  133413 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:22:03.159799  133413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-106879
	I1205 20:22:03.177626  133413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/running-upgrade-106879/id_rsa Username:docker}
	I1205 20:22:03.278062  133413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:22:03.302042  133413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1205 20:22:03.324022  133413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 20:22:03.358244  133413 provision.go:86] duration metric: configureAuth took 408.419433ms
	I1205 20:22:03.358307  133413 ubuntu.go:193] setting minikube options for container-runtime
	I1205 20:22:03.358487  133413 config.go:182] Loaded profile config "running-upgrade-106879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1205 20:22:03.358594  133413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-106879
	I1205 20:22:03.377895  133413 main.go:141] libmachine: Using SSH client type: native
	I1205 20:22:03.378307  133413 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32958 <nil> <nil>}
	I1205 20:22:03.378324  133413 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:22:04.110856  133413 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:22:04.110880  133413 machine.go:91] provisioned docker machine in 1.499025229s
	I1205 20:22:04.110890  133413 start.go:300] post-start starting for "running-upgrade-106879" (driver="docker")
	I1205 20:22:04.110902  133413 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:22:04.110977  133413 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:22:04.111018  133413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-106879
	I1205 20:22:04.133837  133413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/running-upgrade-106879/id_rsa Username:docker}
	I1205 20:22:04.253326  133413 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:22:04.259351  133413 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1205 20:22:04.259379  133413 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 20:22:04.259391  133413 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1205 20:22:04.259398  133413 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1205 20:22:04.259408  133413 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-2478/.minikube/addons for local assets ...
	I1205 20:22:04.259462  133413 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-2478/.minikube/files for local assets ...
	I1205 20:22:04.259542  133413 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-2478/.minikube/files/etc/ssl/certs/77732.pem -> 77732.pem in /etc/ssl/certs
	I1205 20:22:04.259639  133413 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:22:04.273906  133413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/files/etc/ssl/certs/77732.pem --> /etc/ssl/certs/77732.pem (1708 bytes)
	I1205 20:22:04.311426  133413 start.go:303] post-start completed in 200.520661ms
	I1205 20:22:04.311504  133413 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 20:22:04.311544  133413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-106879
	I1205 20:22:04.348958  133413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/running-upgrade-106879/id_rsa Username:docker}
	I1205 20:22:04.476902  133413 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 20:22:04.490468  133413 fix.go:56] fixHost completed within 1.904845353s
	I1205 20:22:04.490489  133413 start.go:83] releasing machines lock for "running-upgrade-106879", held for 1.90488893s
	I1205 20:22:04.490567  133413 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-106879
	I1205 20:22:04.517057  133413 ssh_runner.go:195] Run: cat /version.json
	I1205 20:22:04.517112  133413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-106879
	I1205 20:22:04.517579  133413 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:22:04.517622  133413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-106879
	I1205 20:22:04.558691  133413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/running-upgrade-106879/id_rsa Username:docker}
	I1205 20:22:04.565502  133413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/running-upgrade-106879/id_rsa Username:docker}
	W1205 20:22:04.700835  133413 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1205 20:22:04.700916  133413 ssh_runner.go:195] Run: systemctl --version
	I1205 20:22:04.786541  133413 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:22:04.902754  133413 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 20:22:04.908015  133413 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:22:04.931905  133413 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1205 20:22:04.932003  133413 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:22:04.961290  133413 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:22:04.961313  133413 start.go:475] detecting cgroup driver to use...
	I1205 20:22:04.961422  133413 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1205 20:22:04.961496  133413 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:22:05.010627  133413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:22:05.023715  133413 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:22:05.024120  133413 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:22:05.041334  133413 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:22:05.053953  133413 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1205 20:22:05.066295  133413 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1205 20:22:05.066406  133413 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:22:05.208326  133413 docker.go:219] disabling docker service ...
	I1205 20:22:05.208431  133413 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:22:05.239884  133413 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:22:05.262379  133413 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:22:05.453753  133413 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:22:05.627796  133413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:22:05.649183  133413 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:22:05.666999  133413 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1205 20:22:05.667111  133413 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:22:05.683666  133413 out.go:177] 
	W1205 20:22:05.686067  133413 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1205 20:22:05.686209  133413 out.go:239] * 
	* 
	W1205 20:22:05.687239  133413 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 20:22:05.688742  133413 out.go:177] 

** /stderr **
version_upgrade_test.go:145: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p running-upgrade-106879 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-12-05 20:22:05.713531049 +0000 UTC m=+2830.916127051
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-106879
helpers_test.go:235: (dbg) docker inspect running-upgrade-106879:

-- stdout --
	[
	    {
	        "Id": "e617e26edcd66c6b19c6145e7cd70cc74652a573b675dc800618db32c658a514",
	        "Created": "2023-12-05T20:21:16.15950883Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 129765,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-05T20:21:16.559654512Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/e617e26edcd66c6b19c6145e7cd70cc74652a573b675dc800618db32c658a514/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e617e26edcd66c6b19c6145e7cd70cc74652a573b675dc800618db32c658a514/hostname",
	        "HostsPath": "/var/lib/docker/containers/e617e26edcd66c6b19c6145e7cd70cc74652a573b675dc800618db32c658a514/hosts",
	        "LogPath": "/var/lib/docker/containers/e617e26edcd66c6b19c6145e7cd70cc74652a573b675dc800618db32c658a514/e617e26edcd66c6b19c6145e7cd70cc74652a573b675dc800618db32c658a514-json.log",
	        "Name": "/running-upgrade-106879",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-106879:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "running-upgrade-106879",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e8cec0d210d6a05eab3812eea5e71f7b169a2e6676fd238898f00b447c46ee00-init/diff:/var/lib/docker/overlay2/5697cc58b0d9d67086c83734b7a89a81fd166fd2c3c2202b3435272737bd284a/diff:/var/lib/docker/overlay2/a70da0711e78a89bbe89669ad6c843e0a15048991c7c90f9e74c23e46f2a492f/diff:/var/lib/docker/overlay2/17015f398f53caa92ff87ba2dc0227f43cb13a69713d9f631b77c3c0381c36ad/diff:/var/lib/docker/overlay2/4a6d9603be02053b9af9653a774f18fe5f455001e9a382f33b590ce3bdef65ca/diff:/var/lib/docker/overlay2/94bb8ebcad661e74003e096d230983673894a1cb71be45ca462acb1c2d5c4748/diff:/var/lib/docker/overlay2/133a5f7486e714547d91cdd5a61d6076baace0af467e79805d1fba8933c72bca/diff:/var/lib/docker/overlay2/1fd4b43c54fd3e525663854979d5a15723ad94bf6faac2a65c4d9620b1678864/diff:/var/lib/docker/overlay2/7dac2e1ee6b35c42d77b5b52e2a1dccb924f45db4e0b58bf6ceb0732020b1c09/diff:/var/lib/docker/overlay2/a6becad204a2391c2e4914160c2b05b87fd36150b073cdfe9b7185aa17df6507/diff:/var/lib/docker/overlay2/117b55
83d273fb9c3bee4012db76d48a8d5f1b9a4d648e787927b590921b531d/diff:/var/lib/docker/overlay2/add645ffc53414e9ef9accd635109c02595fd3c0ca11eaefce9bebc01460894a/diff:/var/lib/docker/overlay2/643f88e5d008a1d58bdda9ef29e7eaaaf6938e99ff2f76031a3869a53f433bfa/diff:/var/lib/docker/overlay2/82000016aa5e6d48f2224da1e314e80fe304cb787ad07341e1aa9fb135f6c667/diff:/var/lib/docker/overlay2/b9fbb5b5173a3791099f667ba168f194ad6c7ba0ffea6efd849130c0cca38cdf/diff:/var/lib/docker/overlay2/37aa0a7a15d8c43a916e2d193fbf7484fef43f90c79b4c491dd8fe6eb19b2002/diff:/var/lib/docker/overlay2/5849ff9a836d521ec4f4ad702e620fc369df37d40d9909202f5983dd9f6cef00/diff:/var/lib/docker/overlay2/37400a849bbb50bbfc27ced40792ff289c98424cfeef53909878c57632544383/diff:/var/lib/docker/overlay2/93f352786dd003efac2e60d6da6f728ced09f38248067c628157a1d60c4e6d1f/diff:/var/lib/docker/overlay2/1f0ef63305fb8a11a44b605edf9e0fb9fc6d45b838c9103b5aaa18ea9d98158b/diff:/var/lib/docker/overlay2/e7fcef6add1fcc984d3a362ae15c367bba9676436fcdca55f7ad6e2eceab430e/diff:/var/lib/d
ocker/overlay2/d9de153113811b3df8d4baed3dd353a7c4b2c9bee35fa78eed53ff8ab7f1ce34/diff:/var/lib/docker/overlay2/86524ea93e9c9d991112eaf21837879652072f18e87a92455341b0ec29881813/diff:/var/lib/docker/overlay2/b22426385763418f65400a3d73e6c6911c3cd8cfec960a6a7a1bb0bda758ec0f/diff:/var/lib/docker/overlay2/8937f8c4d2e66e95c764f2343a427e554bc55edfeb88d222adfa7e6e0212fe20/diff:/var/lib/docker/overlay2/cdd6f0db8cc3c4204e0609b9e03f9b1570ca287816880fc4b076a18907a85545/diff:/var/lib/docker/overlay2/e2c94e205319cb64d8d70f9fac5f29dfe59443c395d5d1789658955dae9773dd/diff:/var/lib/docker/overlay2/9879d13d237b38d39eecb617e13443052223c204adbab0536b1e766a7530ddaf/diff:/var/lib/docker/overlay2/1819f58e7c3012d77d4db23a2e54d242fd11683241fd089717518d69ed060db4/diff:/var/lib/docker/overlay2/6cbae35a5b69c53fbeb8b40d3123340226003cd0681529a39428028c2e29e72a/diff:/var/lib/docker/overlay2/5317e9ab8a1225437112d0b6c87696c8d390b6af5b9cfa7d48a1f3deae7bd42d/diff:/var/lib/docker/overlay2/d211ca8a6f649bcb73b14e2c8166e0654a2fc9fe6d4c64fe1793ce498ad
50913/diff:/var/lib/docker/overlay2/34dc8334ce2f8fa75884a8395d4a4df8eb8c129ee26d07b0a2140dadfa04da6d/diff:/var/lib/docker/overlay2/7f6b4d183023134f547585810561212cec3292f070fb73c03894240c71a845f0/diff:/var/lib/docker/overlay2/be3b55969684fd305541c4486ba989a913094116c9a3dd8d0ba0b1efdedc05cd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e8cec0d210d6a05eab3812eea5e71f7b169a2e6676fd238898f00b447c46ee00/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e8cec0d210d6a05eab3812eea5e71f7b169a2e6676fd238898f00b447c46ee00/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e8cec0d210d6a05eab3812eea5e71f7b169a2e6676fd238898f00b447c46ee00/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-106879",
	                "Source": "/var/lib/docker/volumes/running-upgrade-106879/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-106879",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-106879",
	                "name.minikube.sigs.k8s.io": "running-upgrade-106879",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f30043f679d220d014ecc9be19637d42b5ac8bd83381766247fd70b8e0e5ecda",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32958"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32957"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32956"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32955"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f30043f679d2",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "running-upgrade-106879": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.70.128"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e617e26edcd6",
	                        "running-upgrade-106879"
	                    ],
	                    "NetworkID": "ade181cb16eefbd4ff69f7054371f7935fe75578ca7f0b1005224e703afb1bd7",
	                    "EndpointID": "b6152d9a797f42452a842556ee126afd96a33a892c360b54a085c8461f10e973",
	                    "Gateway": "192.168.70.1",
	                    "IPAddress": "192.168.70.128",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:46:80",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
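
The State and NetworkSettings blocks above are what the post-mortem keys on: State.Status/State.Running say whether the kic container is still up, and NetworkSettings.Ports shows where 22, 2376, 5000 and 8443 were published on the host. A minimal standalone sketch (not part of the test suite; the profile name is the one from this run) that decodes just those fields:

// Sketch: decode the `docker inspect` JSON above into the few fields the
// post-mortem cares about. `docker inspect` returns a JSON array.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type containerInfo struct {
	Name  string
	State struct {
		Status  string
		Running bool
	}
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "running-upgrade-106879").Output()
	if err != nil {
		log.Fatalf("docker inspect: %v", err)
	}
	var infos []containerInfo
	if err := json.Unmarshal(out, &infos); err != nil {
		log.Fatalf("decode: %v", err)
	}
	for _, c := range infos {
		fmt.Printf("%s: status=%s running=%v\n", c.Name, c.State.Status, c.State.Running)
		for port, binds := range c.NetworkSettings.Ports {
			for _, b := range binds {
				fmt.Printf("  %s -> %s:%s\n", port, b.HostIp, b.HostPort)
			}
		}
	}
}
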
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-106879 -n running-upgrade-106879
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-106879 -n running-upgrade-106879: exit status 4 (542.784401ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1205 20:22:06.209710  134156 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-106879" does not appear in /home/jenkins/minikube-integration/17731-2478/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-106879" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
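
The status check degrades to exit status 4 rather than a hard failure because only the kubeconfig side is broken: the container is Running, but the profile's context is gone from the kubeconfig, so status.go cannot extract an endpoint IP. A crude standalone check in the same spirit (assumptions: a plain substring search rather than the clientcmd parsing minikube actually uses; KUBECONFIG is taken from the environment):

// Sketch: confirm the profile still has an entry in the kubeconfig file;
// if not, `minikube update-context` is the fix the warning suggests.
package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
)

func main() {
	path := os.Getenv("KUBECONFIG") // the log points at .../17731-2478/kubeconfig
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatalf("read kubeconfig: %v", err)
	}
	profile := "running-upgrade-106879"
	if !bytes.Contains(data, []byte(profile)) {
		fmt.Printf("%q missing from %s; run `minikube update-context`\n", profile, path)
	}
}
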
helpers_test.go:175: Cleaning up "running-upgrade-106879" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-106879
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-106879: (2.858647914s)
--- FAIL: TestRunningBinaryUpgrade (73.34s)
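
For reference, the sequence this test drives, as visible in the log: bring the profile up with a released v1.17.0 binary, then run `start` again on the same live profile with the binary under test; the second start is the step that exited with status 90 here. A minimal sketch of that sequence (the old-binary path is a placeholder for the versioned binary the harness downloads; this is not the test's own code):

// Sketch of the running-binary-upgrade flow: old binary up, new binary
// takes over the same profile without a stop in between.
package main

import (
	"log"
	"os"
	"os/exec"
)

func run(bin string, args ...string) error {
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	args := []string{"start", "-p", "running-upgrade-106879", "--memory=2200",
		"--driver=docker", "--container-runtime=crio"}

	// 1. Bring the cluster up with the released binary (placeholder path).
	if err := run("/tmp/minikube-v1.17.0", args...); err != nil {
		log.Fatalf("old binary start: %v", err)
	}
	// 2. With the cluster still running, start the same profile with the
	//    binary under test; in the log above this exited with status 90.
	if err := run("out/minikube-linux-arm64", append(args, "--alsologtostderr", "-v=1")...); err != nil {
		log.Fatalf("upgrade start: %v", err)
	}
}
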

x
+
TestMissingContainerUpgrade (185.25s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.17.0.3436229259.exe start -p missing-upgrade-099237 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.17.0.3436229259.exe start -p missing-upgrade-099237 --memory=2200 --driver=docker  --container-runtime=crio: (2m17.096744878s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-099237
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-099237: (10.336273828s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-099237
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-099237 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1205 20:19:14.991669    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.crt: no such file or directory
version_upgrade_test.go:342: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p missing-upgrade-099237 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (34.096957475s)

-- stdout --
	* [missing-upgrade-099237] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17731
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17731-2478/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-2478/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-099237 in cluster missing-upgrade-099237
	* Pulling base image ...
	* docker "missing-upgrade-099237" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...

-- /stdout --
** stderr ** 
	I1205 20:18:55.777324  121035 out.go:296] Setting OutFile to fd 1 ...
	I1205 20:18:55.777555  121035 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:18:55.777582  121035 out.go:309] Setting ErrFile to fd 2...
	I1205 20:18:55.777601  121035 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:18:55.777882  121035 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-2478/.minikube/bin
	I1205 20:18:55.778302  121035 out.go:303] Setting JSON to false
	I1205 20:18:55.779394  121035 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3682,"bootTime":1701803854,"procs":259,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 20:18:55.779493  121035 start.go:138] virtualization:  
	I1205 20:18:55.783616  121035 out.go:177] * [missing-upgrade-099237] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1205 20:18:55.786057  121035 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 20:18:55.786153  121035 notify.go:220] Checking for updates...
	I1205 20:18:55.790306  121035 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:18:55.792333  121035 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-2478/kubeconfig
	I1205 20:18:55.794218  121035 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-2478/.minikube
	I1205 20:18:55.796066  121035 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 20:18:55.798580  121035 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:18:55.801463  121035 config.go:182] Loaded profile config "missing-upgrade-099237": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1205 20:18:55.804053  121035 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1205 20:18:55.806104  121035 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 20:18:55.830831  121035 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1205 20:18:55.830942  121035 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 20:18:55.929900  121035 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-12-05 20:18:55.920232902 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1205 20:18:55.930001  121035 docker.go:295] overlay module found
	I1205 20:18:55.932418  121035 out.go:177] * Using the docker driver based on existing profile
	I1205 20:18:55.934547  121035 start.go:298] selected driver: docker
	I1205 20:18:55.934564  121035 start.go:902] validating driver "docker" against &{Name:missing-upgrade-099237 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-099237 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.166 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1205 20:18:55.934651  121035 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:18:55.935244  121035 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 20:18:55.998572  121035 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-12-05 20:18:55.989873656 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1205 20:18:55.998921  121035 cni.go:84] Creating CNI manager for ""
	I1205 20:18:55.998939  121035 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 20:18:55.998953  121035 start_flags.go:323] config:
	{Name:missing-upgrade-099237 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-099237 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.166 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1205 20:18:56.001530  121035 out.go:177] * Starting control plane node missing-upgrade-099237 in cluster missing-upgrade-099237
	I1205 20:18:56.004170  121035 cache.go:121] Beginning downloading kic base image for docker with crio
	I1205 20:18:56.006549  121035 out.go:177] * Pulling base image ...
	I1205 20:18:56.008926  121035 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1205 20:18:56.009026  121035 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1205 20:18:56.026823  121035 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	I1205 20:18:56.027423  121035 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local cache directory
	I1205 20:18:56.027466  121035 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	W1205 20:18:56.079536  121035 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1205 20:18:56.079716  121035 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/missing-upgrade-099237/config.json ...
	I1205 20:18:56.079815  121035 cache.go:107] acquiring lock: {Name:mk8a4de1334950434f49dfbc7cc0e43bfdbdb2f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:18:56.079896  121035 cache.go:115] /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 20:18:56.079906  121035 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 97.223µs
	I1205 20:18:56.079998  121035 cache.go:107] acquiring lock: {Name:mk0d29774b4c5a0581fe381cfea74b828beff54c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:18:56.080105  121035 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.2
	I1205 20:18:56.080308  121035 cache.go:107] acquiring lock: {Name:mka6142c8784c3eb00c6bbf3953947386497eac0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:18:56.080403  121035 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.2
	I1205 20:18:56.080513  121035 cache.go:107] acquiring lock: {Name:mk52e13bce5a8ae7e8a902fdeaaeade9860de835 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:18:56.080620  121035 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.2
	I1205 20:18:56.080674  121035 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 20:18:56.080736  121035 cache.go:107] acquiring lock: {Name:mkb0e889217661a7ef4acaf28adec14f1a06cff8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:18:56.080802  121035 cache.go:107] acquiring lock: {Name:mk09e84b1799fc084fd06411f9276e8b348e91e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:18:56.080825  121035 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.2
	I1205 20:18:56.081070  121035 cache.go:107] acquiring lock: {Name:mk546e34e8abee7d5a5f944e7950213bca6c089c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:18:56.081164  121035 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1205 20:18:56.081746  121035 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.2
	I1205 20:18:56.081999  121035 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1205 20:18:56.082136  121035 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1205 20:18:56.082298  121035 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.2
	I1205 20:18:56.082463  121035 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.2
	I1205 20:18:56.082475  121035 cache.go:107] acquiring lock: {Name:mk1fa1653fd02fdb26ff9477a9afc4d03e380d8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:18:56.082573  121035 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I1205 20:18:56.082628  121035 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.2
	I1205 20:18:56.083475  121035 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1205 20:18:56.083526  121035 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1205 20:18:56.401388  121035 cache.go:162] opening:  /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2
	I1205 20:18:56.417988  121035 cache.go:162] opening:  /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2
	I1205 20:18:56.433745  121035 cache.go:162] opening:  /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1205 20:18:56.448414  121035 cache.go:162] opening:  /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2
	W1205 20:18:56.454548  121035 image.go:265] image registry.k8s.io/etcd:3.4.13-0 arch mismatch: want arm64 got amd64. fixing
	I1205 20:18:56.454621  121035 cache.go:162] opening:  /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0
	W1205 20:18:56.456865  121035 image.go:265] image registry.k8s.io/coredns:1.7.0 arch mismatch: want arm64 got amd64. fixing
	I1205 20:18:56.456903  121035 cache.go:162] opening:  /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0
	W1205 20:18:56.462944  121035 image.go:265] image registry.k8s.io/kube-proxy:v1.20.2 arch mismatch: want arm64 got amd64. fixing
	I1205 20:18:56.463011  121035 cache.go:162] opening:  /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2
	I1205 20:18:56.570126  121035 cache.go:157] /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1205 20:18:56.570193  121035 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 489.394303ms
	I1205 20:18:56.570217  121035 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  17.69 KiB / 287.99 MiB [>] 0.01% ? p/s ?    > gcr.io/k8s-minikube/kicbase...:  945.28 KiB / 287.99 MiB [] 0.32% ? p/s ?I1205 20:18:56.899690  121035 cache.go:157] /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1205 20:18:56.899717  121035 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 817.246191ms
	I1205 20:18:56.899730  121035 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I1205 20:18:57.078411  121035 cache.go:157] /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1205 20:18:57.078438  121035 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 997.704069ms
	I1205 20:18:57.078463  121035 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  15.17 MiB / 287.99 MiB [>] 5.27% ? p/s ?I1205 20:18:57.206719  121035 cache.go:157] /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1205 20:18:57.206766  121035 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 1.126460237s
	I1205 20:18:57.206790  121035 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  25.93 MiB / 287.99 MiB  9.00% 43.17 MiB     > gcr.io/k8s-minikube/kicbase...:  25.93 MiB / 287.99 MiB  9.00% 43.17 MiB I1205 20:18:57.531943  121035 cache.go:157] /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1205 20:18:57.531969  121035 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 1.451457388s
	I1205 20:18:57.531982  121035 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  25.93 MiB / 287.99 MiB  9.00% 43.17 MiB     > gcr.io/k8s-minikube/kicbase...:  26.09 MiB / 287.99 MiB  9.06% 40.40 MiB     > gcr.io/k8s-minikube/kicbase...:  38.53 MiB / 287.99 MiB  13.38% 40.40 MiB    > gcr.io/k8s-minikube/kicbase...:  43.87 MiB / 287.99 MiB  15.23% 40.40 MiBI1205 20:18:58.313585  121035 cache.go:157] /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1205 20:18:58.313626  121035 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 2.233646871s
	I1205 20:18:58.313639  121035 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  56.03 MiB / 287.99 MiB  19.45% 41.02 MiB    > gcr.io/k8s-minikube/kicbase...:  67.79 MiB / 287.99 MiB  23.54% 41.02 MiB    > gcr.io/k8s-minikube/kicbase...:  72.42 MiB / 287.99 MiB  25.15% 41.02 MiB    > gcr.io/k8s-minikube/kicbase...:  85.33 MiB / 287.99 MiB  29.63% 41.52 MiB    > gcr.io/k8s-minikube/kicbase...:  99.79 MiB / 287.99 MiB  34.65% 41.52 MiB    > gcr.io/k8s-minikube/kicbase...:  116.31 MiB / 287.99 MiB  40.39% 41.52 Mi    > gcr.io/k8s-minikube/kicbase...:  131.32 MiB / 287.99 MiB  45.60% 43.79 Mi    > gcr.io/k8s-minikube/kicbase...:  143.83 MiB / 287.99 MiB  49.94% 43.79 MiI1205 20:18:59.989348  121035 cache.go:157] /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1205 20:18:59.990093  121035 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 3.909035874s
	I1205 20:18:59.990166  121035 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1205 20:18:59.990203  121035 cache.go:87] Successfully saved all images to host disk.
	    > gcr.io/k8s-minikube/kicbase...:  163.79 MiB / 287.99 MiB  56.87% 43.79 Mi    > gcr.io/k8s-minikube/kicbase...:  171.72 MiB / 287.99 MiB  59.63% 45.30 Mi    > gcr.io/k8s-minikube/kicbase...:  174.79 MiB / 287.99 MiB  60.69% 45.30 Mi    > gcr.io/k8s-minikube/kicbase...:  191.99 MiB / 287.99 MiB  66.67% 45.30 Mi    > gcr.io/k8s-minikube/kicbase...:  208.76 MiB / 287.99 MiB  72.49% 46.36 Mi    > gcr.io/k8s-minikube/kicbase...:  211.21 MiB / 287.99 MiB  73.34% 46.36 Mi    > gcr.io/k8s-minikube/kicbase...:  228.93 MiB / 287.99 MiB  79.49% 46.36 Mi    > gcr.io/k8s-minikube/kicbase...:  238.06 MiB / 287.99 MiB  82.66% 46.52 Mi    > gcr.io/k8s-minikube/kicbase...:  244.73 MiB / 287.99 MiB  84.98% 46.52 Mi    > gcr.io/k8s-minikube/kicbase...:  261.43 MiB / 287.99 MiB  90.78% 46.52 Mi    > gcr.io/k8s-minikube/kicbase...:  265.05 MiB / 287.99 MiB  92.03% 46.42 Mi    > gcr.io/k8s-minikube/kicbase...:  274.05 MiB / 287.99 MiB  95.16% 46.42 Mi    > gcr.io/k8s-minikube/kicbase...:  287.96 MiB / 287.99 MiB  99.99% 46.42 Mi    > gcr.io/k8s-minikube/kicbase...:  287.96 MiB / 287.99 MiB  99.99% 45.89 Mi    > gcr.io/k8s-minikube/kicbase...:  287.96 MiB / 287.99 MiB  99.99% 45.89 Mi    > gcr.io/k8s-minikube/kicbase...:  287.97 MiB / 287.99 MiB  99.99% 45.89 Mi    > gcr.io/k8s-minikube/kicbase...:  287.97 MiB / 287.99 MiB  99.99% 42.93 Mi    > gcr.io/k8s-minikube/kicbase...:  287.97 MiB / 287.99 MiB  99.99% 42.93 Mi    > gcr.io/k8s-minikube/kicbase...:  287.97 MiB / 287.99 MiB  99.99% 42.93 Mi    > gcr.io/k8s-minikube/kicbase...:  287.98 MiB / 287.99 MiB  100.00% 40.16 M    > gcr.io/k8s-minikube/kicbase...:  287.98 MiB / 287.99 MiB  100.00% 40.16 M    > gcr.io/k8s-minikube/kicbase...:  287.98 MiB / 287.99 MiB  100.00% 40.16 M    > gcr.io/k8s-minikube/kicbase...:  287.99 MiB / 287.99 MiB  100.00% 37.57 M    > gcr.io/k8s-minikube/kicbase...:  287.99 MiB / 287.99 MiB  100.00% 36.77 MI1205 20:19:04.512109  121035 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e as a tarball
	I1205 20:19:04.512122  121035 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from local cache
	I1205 20:19:05.500978  121035 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from cached tarball
	I1205 20:19:05.501020  121035 cache.go:194] Successfully downloaded all kic artifacts
	I1205 20:19:05.501065  121035 start.go:365] acquiring machines lock for missing-upgrade-099237: {Name:mk1169581640dc2da169190dbe66fab5b7d63de0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:19:05.501136  121035 start.go:369] acquired machines lock for "missing-upgrade-099237" in 52.85µs
	I1205 20:19:05.501157  121035 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:19:05.501165  121035 fix.go:54] fixHost starting: 
	I1205 20:19:05.501446  121035 cli_runner.go:164] Run: docker container inspect missing-upgrade-099237 --format={{.State.Status}}
	W1205 20:19:05.525138  121035 cli_runner.go:211] docker container inspect missing-upgrade-099237 --format={{.State.Status}} returned with exit code 1
	I1205 20:19:05.525224  121035 fix.go:102] recreateIfNeeded on missing-upgrade-099237: state= err=unknown state "missing-upgrade-099237": docker container inspect missing-upgrade-099237 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099237
	I1205 20:19:05.525242  121035 fix.go:107] machineExists: false. err=machine does not exist
	I1205 20:19:05.539077  121035 out.go:177] * docker "missing-upgrade-099237" container is missing, will recreate.
	I1205 20:19:05.550685  121035 delete.go:124] DEMOLISHING missing-upgrade-099237 ...
	I1205 20:19:05.550790  121035 cli_runner.go:164] Run: docker container inspect missing-upgrade-099237 --format={{.State.Status}}
	W1205 20:19:05.576142  121035 cli_runner.go:211] docker container inspect missing-upgrade-099237 --format={{.State.Status}} returned with exit code 1
	W1205 20:19:05.576308  121035 stop.go:75] unable to get state: unknown state "missing-upgrade-099237": docker container inspect missing-upgrade-099237 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099237
	I1205 20:19:05.576344  121035 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-099237": docker container inspect missing-upgrade-099237 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099237
	I1205 20:19:05.577007  121035 cli_runner.go:164] Run: docker container inspect missing-upgrade-099237 --format={{.State.Status}}
	W1205 20:19:05.599993  121035 cli_runner.go:211] docker container inspect missing-upgrade-099237 --format={{.State.Status}} returned with exit code 1
	I1205 20:19:05.600050  121035 delete.go:82] Unable to get host status for missing-upgrade-099237, assuming it has already been deleted: state: unknown state "missing-upgrade-099237": docker container inspect missing-upgrade-099237 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099237
	I1205 20:19:05.600112  121035 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-099237
	W1205 20:19:05.627910  121035 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-099237 returned with exit code 1
	I1205 20:19:05.627939  121035 kic.go:371] could not find the container missing-upgrade-099237 to remove it. will try anyways
	I1205 20:19:05.627987  121035 cli_runner.go:164] Run: docker container inspect missing-upgrade-099237 --format={{.State.Status}}
	W1205 20:19:05.651593  121035 cli_runner.go:211] docker container inspect missing-upgrade-099237 --format={{.State.Status}} returned with exit code 1
	W1205 20:19:05.651653  121035 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-099237": docker container inspect missing-upgrade-099237 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099237
	I1205 20:19:05.651713  121035 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-099237 /bin/bash -c "sudo init 0"
	W1205 20:19:05.676143  121035 cli_runner.go:211] docker exec --privileged -t missing-upgrade-099237 /bin/bash -c "sudo init 0" returned with exit code 1
	I1205 20:19:05.676170  121035 oci.go:650] error shutdown missing-upgrade-099237: docker exec --privileged -t missing-upgrade-099237 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099237
	I1205 20:19:06.676399  121035 cli_runner.go:164] Run: docker container inspect missing-upgrade-099237 --format={{.State.Status}}
	W1205 20:19:06.697175  121035 cli_runner.go:211] docker container inspect missing-upgrade-099237 --format={{.State.Status}} returned with exit code 1
	I1205 20:19:06.697246  121035 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-099237": docker container inspect missing-upgrade-099237 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099237
	I1205 20:19:06.697260  121035 oci.go:664] temporary error: container missing-upgrade-099237 status is  but expect it to be exited
	I1205 20:19:06.697287  121035 retry.go:31] will retry after 521.508088ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-099237": docker container inspect missing-upgrade-099237 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099237
	I1205 20:19:07.218974  121035 cli_runner.go:164] Run: docker container inspect missing-upgrade-099237 --format={{.State.Status}}
	W1205 20:19:07.240922  121035 cli_runner.go:211] docker container inspect missing-upgrade-099237 --format={{.State.Status}} returned with exit code 1
	I1205 20:19:07.240975  121035 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-099237": docker container inspect missing-upgrade-099237 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099237
	I1205 20:19:07.240984  121035 oci.go:664] temporary error: container missing-upgrade-099237 status is  but expect it to be exited
	I1205 20:19:07.241008  121035 retry.go:31] will retry after 1.071445509s: couldn't verify container is exited. %v: unknown state "missing-upgrade-099237": docker container inspect missing-upgrade-099237 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099237
	I1205 20:19:08.313132  121035 cli_runner.go:164] Run: docker container inspect missing-upgrade-099237 --format={{.State.Status}}
	W1205 20:19:08.342017  121035 cli_runner.go:211] docker container inspect missing-upgrade-099237 --format={{.State.Status}} returned with exit code 1
	I1205 20:19:08.342090  121035 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-099237": docker container inspect missing-upgrade-099237 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099237
	I1205 20:19:08.342103  121035 oci.go:664] temporary error: container missing-upgrade-099237 status is  but expect it to be exited
	I1205 20:19:08.342128  121035 retry.go:31] will retry after 941.810504ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-099237": docker container inspect missing-upgrade-099237 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099237
	I1205 20:19:09.284994  121035 cli_runner.go:164] Run: docker container inspect missing-upgrade-099237 --format={{.State.Status}}
	W1205 20:19:09.308497  121035 cli_runner.go:211] docker container inspect missing-upgrade-099237 --format={{.State.Status}} returned with exit code 1
	I1205 20:19:09.308549  121035 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-099237": docker container inspect missing-upgrade-099237 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099237
	I1205 20:19:09.308562  121035 oci.go:664] temporary error: container missing-upgrade-099237 status is  but expect it to be exited
	I1205 20:19:09.308591  121035 retry.go:31] will retry after 864.920039ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-099237": docker container inspect missing-upgrade-099237 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099237
	I1205 20:19:10.173640  121035 cli_runner.go:164] Run: docker container inspect missing-upgrade-099237 --format={{.State.Status}}
	W1205 20:19:10.191090  121035 cli_runner.go:211] docker container inspect missing-upgrade-099237 --format={{.State.Status}} returned with exit code 1
	I1205 20:19:10.191142  121035 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-099237": docker container inspect missing-upgrade-099237 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099237
	I1205 20:19:10.191151  121035 oci.go:664] temporary error: container missing-upgrade-099237 status is  but expect it to be exited
	I1205 20:19:10.191175  121035 retry.go:31] will retry after 3.182311597s: couldn't verify container is exited. %v: unknown state "missing-upgrade-099237": docker container inspect missing-upgrade-099237 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099237
	I1205 20:19:13.374991  121035 cli_runner.go:164] Run: docker container inspect missing-upgrade-099237 --format={{.State.Status}}
	W1205 20:19:13.418448  121035 cli_runner.go:211] docker container inspect missing-upgrade-099237 --format={{.State.Status}} returned with exit code 1
	I1205 20:19:13.418500  121035 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-099237": docker container inspect missing-upgrade-099237 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099237
	I1205 20:19:13.418508  121035 oci.go:664] temporary error: container missing-upgrade-099237 status is  but expect it to be exited
	I1205 20:19:13.418538  121035 retry.go:31] will retry after 3.416462923s: couldn't verify container is exited. %v: unknown state "missing-upgrade-099237": docker container inspect missing-upgrade-099237 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099237
	I1205 20:19:16.835893  121035 cli_runner.go:164] Run: docker container inspect missing-upgrade-099237 --format={{.State.Status}}
	W1205 20:19:16.852014  121035 cli_runner.go:211] docker container inspect missing-upgrade-099237 --format={{.State.Status}} returned with exit code 1
	I1205 20:19:16.852074  121035 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-099237": docker container inspect missing-upgrade-099237 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099237
	I1205 20:19:16.852094  121035 oci.go:664] temporary error: container missing-upgrade-099237 status is  but expect it to be exited
	I1205 20:19:16.852120  121035 retry.go:31] will retry after 5.833071219s: couldn't verify container is exited. %v: unknown state "missing-upgrade-099237": docker container inspect missing-upgrade-099237 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099237
	I1205 20:19:22.686137  121035 cli_runner.go:164] Run: docker container inspect missing-upgrade-099237 --format={{.State.Status}}
	W1205 20:19:22.701772  121035 cli_runner.go:211] docker container inspect missing-upgrade-099237 --format={{.State.Status}} returned with exit code 1
	I1205 20:19:22.701826  121035 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-099237": docker container inspect missing-upgrade-099237 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099237
	I1205 20:19:22.701838  121035 oci.go:664] temporary error: container missing-upgrade-099237 status is  but expect it to be exited
	I1205 20:19:22.701868  121035 oci.go:88] couldn't shut down missing-upgrade-099237 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-099237": docker container inspect missing-upgrade-099237 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099237
	 
	I1205 20:19:22.701917  121035 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-099237
	I1205 20:19:22.717731  121035 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-099237
	W1205 20:19:22.732806  121035 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-099237 returned with exit code 1
	I1205 20:19:22.732886  121035 cli_runner.go:164] Run: docker network inspect missing-upgrade-099237 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 20:19:22.748921  121035 cli_runner.go:164] Run: docker network rm missing-upgrade-099237
	I1205 20:19:22.850354  121035 fix.go:114] Sleeping 1 second for extra luck!
	I1205 20:19:23.850505  121035 start.go:125] createHost starting for "" (driver="docker")
	I1205 20:19:23.852941  121035 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1205 20:19:23.853079  121035 start.go:159] libmachine.API.Create for "missing-upgrade-099237" (driver="docker")
	I1205 20:19:23.853102  121035 client.go:168] LocalClient.Create starting
	I1205 20:19:23.853161  121035 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem
	I1205 20:19:23.853201  121035 main.go:141] libmachine: Decoding PEM data...
	I1205 20:19:23.853219  121035 main.go:141] libmachine: Parsing certificate...
	I1205 20:19:23.853276  121035 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17731-2478/.minikube/certs/cert.pem
	I1205 20:19:23.853294  121035 main.go:141] libmachine: Decoding PEM data...
	I1205 20:19:23.853304  121035 main.go:141] libmachine: Parsing certificate...
	I1205 20:19:23.853537  121035 cli_runner.go:164] Run: docker network inspect missing-upgrade-099237 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1205 20:19:23.871088  121035 cli_runner.go:211] docker network inspect missing-upgrade-099237 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1205 20:19:23.871172  121035 network_create.go:281] running [docker network inspect missing-upgrade-099237] to gather additional debugging logs...
	I1205 20:19:23.871192  121035 cli_runner.go:164] Run: docker network inspect missing-upgrade-099237
	W1205 20:19:23.888336  121035 cli_runner.go:211] docker network inspect missing-upgrade-099237 returned with exit code 1
	I1205 20:19:23.888376  121035 network_create.go:284] error running [docker network inspect missing-upgrade-099237]: docker network inspect missing-upgrade-099237: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-099237 not found
	I1205 20:19:23.888389  121035 network_create.go:286] output of [docker network inspect missing-upgrade-099237]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-099237 not found
	
	** /stderr **
	I1205 20:19:23.888507  121035 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 20:19:23.906767  121035 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b6ed01875673 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:6c:57:c2:6c} reservation:<nil>}
	I1205 20:19:23.907572  121035 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f407d22902b5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:ed:fd:48:fd} reservation:<nil>}
	I1205 20:19:23.908037  121035 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a4ed55737c00 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:ae:52:fa:a5} reservation:<nil>}
	I1205 20:19:23.909034  121035 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4003756340}
	I1205 20:19:23.909063  121035 network_create.go:124] attempt to create docker network missing-upgrade-099237 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1205 20:19:23.909128  121035 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-099237 missing-upgrade-099237
	I1205 20:19:23.979386  121035 network_create.go:108] docker network missing-upgrade-099237 192.168.76.0/24 created
	I1205 20:19:23.979419  121035 kic.go:121] calculated static IP "192.168.76.2" for the "missing-upgrade-099237" container
	I1205 20:19:23.979495  121035 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1205 20:19:23.996515  121035 cli_runner.go:164] Run: docker volume create missing-upgrade-099237 --label name.minikube.sigs.k8s.io=missing-upgrade-099237 --label created_by.minikube.sigs.k8s.io=true
	I1205 20:19:24.014690  121035 oci.go:103] Successfully created a docker volume missing-upgrade-099237
	I1205 20:19:24.014773  121035 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-099237-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-099237 --entrypoint /usr/bin/test -v missing-upgrade-099237:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib
	I1205 20:19:24.545356  121035 oci.go:107] Successfully prepared a docker volume missing-upgrade-099237
	I1205 20:19:24.545398  121035 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	W1205 20:19:24.545546  121035 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1205 20:19:24.545662  121035 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1205 20:19:24.613931  121035 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-099237 --name missing-upgrade-099237 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-099237 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-099237 --network missing-upgrade-099237 --ip 192.168.76.2 --volume missing-upgrade-099237:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e
	I1205 20:19:24.988711  121035 cli_runner.go:164] Run: docker container inspect missing-upgrade-099237 --format={{.State.Running}}
	I1205 20:19:25.016171  121035 cli_runner.go:164] Run: docker container inspect missing-upgrade-099237 --format={{.State.Status}}
	I1205 20:19:25.038673  121035 cli_runner.go:164] Run: docker exec missing-upgrade-099237 stat /var/lib/dpkg/alternatives/iptables
	I1205 20:19:25.110824  121035 oci.go:144] the created container "missing-upgrade-099237" has a running status.
	I1205 20:19:25.110848  121035 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17731-2478/.minikube/machines/missing-upgrade-099237/id_rsa...
	I1205 20:19:25.601899  121035 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17731-2478/.minikube/machines/missing-upgrade-099237/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1205 20:19:25.625153  121035 cli_runner.go:164] Run: docker container inspect missing-upgrade-099237 --format={{.State.Status}}
	I1205 20:19:25.645687  121035 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1205 20:19:25.645706  121035 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-099237 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1205 20:19:25.726130  121035 cli_runner.go:164] Run: docker container inspect missing-upgrade-099237 --format={{.State.Status}}
	I1205 20:19:25.751389  121035 machine.go:88] provisioning docker machine ...
	I1205 20:19:25.751418  121035 ubuntu.go:169] provisioning hostname "missing-upgrade-099237"
	I1205 20:19:25.751492  121035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-099237
	I1205 20:19:25.774847  121035 main.go:141] libmachine: Using SSH client type: native
	I1205 20:19:25.775294  121035 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32946 <nil> <nil>}
	I1205 20:19:25.775309  121035 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-099237 && echo "missing-upgrade-099237" | sudo tee /etc/hostname
	I1205 20:19:25.933072  121035 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-099237
	
	I1205 20:19:25.933153  121035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-099237
	I1205 20:19:25.953477  121035 main.go:141] libmachine: Using SSH client type: native
	I1205 20:19:25.953881  121035 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32946 <nil> <nil>}
	I1205 20:19:25.953899  121035 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-099237' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-099237/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-099237' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:19:26.096738  121035 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:19:26.096763  121035 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17731-2478/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-2478/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-2478/.minikube}
	I1205 20:19:26.096781  121035 ubuntu.go:177] setting up certificates
	I1205 20:19:26.096793  121035 provision.go:83] configureAuth start
	I1205 20:19:26.096851  121035 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-099237
	I1205 20:19:26.119485  121035 provision.go:138] copyHostCerts
	I1205 20:19:26.119543  121035 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-2478/.minikube/ca.pem, removing ...
	I1205 20:19:26.119554  121035 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-2478/.minikube/ca.pem
	I1205 20:19:26.119624  121035 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-2478/.minikube/ca.pem (1078 bytes)
	I1205 20:19:26.119715  121035 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-2478/.minikube/cert.pem, removing ...
	I1205 20:19:26.119720  121035 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-2478/.minikube/cert.pem
	I1205 20:19:26.119746  121035 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-2478/.minikube/cert.pem (1123 bytes)
	I1205 20:19:26.119845  121035 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-2478/.minikube/key.pem, removing ...
	I1205 20:19:26.119850  121035 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-2478/.minikube/key.pem
	I1205 20:19:26.119876  121035 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-2478/.minikube/key.pem (1679 bytes)
	I1205 20:19:26.119920  121035 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-2478/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-099237 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-099237]
	I1205 20:19:26.545209  121035 provision.go:172] copyRemoteCerts
	I1205 20:19:26.545298  121035 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:19:26.545344  121035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-099237
	I1205 20:19:26.562535  121035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32946 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/missing-upgrade-099237/id_rsa Username:docker}
	I1205 20:19:26.660265  121035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:19:26.681743  121035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1205 20:19:26.702906  121035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 20:19:26.723241  121035 provision.go:86] duration metric: configureAuth took 626.433937ms
	I1205 20:19:26.723304  121035 ubuntu.go:193] setting minikube options for container-runtime
	I1205 20:19:26.723486  121035 config.go:182] Loaded profile config "missing-upgrade-099237": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1205 20:19:26.723595  121035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-099237
	I1205 20:19:26.741050  121035 main.go:141] libmachine: Using SSH client type: native
	I1205 20:19:26.741462  121035 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32946 <nil> <nil>}
	I1205 20:19:26.741487  121035 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:19:27.154766  121035 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:19:27.154793  121035 machine.go:91] provisioned docker machine in 1.403385711s
	I1205 20:19:27.154803  121035 client.go:171] LocalClient.Create took 3.301694704s
	I1205 20:19:27.154815  121035 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-099237" took 3.301735886s
	I1205 20:19:27.154823  121035 start.go:300] post-start starting for "missing-upgrade-099237" (driver="docker")
	I1205 20:19:27.154832  121035 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:19:27.154901  121035 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:19:27.154948  121035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-099237
	I1205 20:19:27.175571  121035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32946 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/missing-upgrade-099237/id_rsa Username:docker}
	I1205 20:19:27.272338  121035 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:19:27.275896  121035 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1205 20:19:27.275921  121035 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 20:19:27.275933  121035 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1205 20:19:27.275941  121035 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1205 20:19:27.275950  121035 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-2478/.minikube/addons for local assets ...
	I1205 20:19:27.276017  121035 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-2478/.minikube/files for local assets ...
	I1205 20:19:27.276099  121035 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-2478/.minikube/files/etc/ssl/certs/77732.pem -> 77732.pem in /etc/ssl/certs
	I1205 20:19:27.276196  121035 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:19:27.283973  121035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/files/etc/ssl/certs/77732.pem --> /etc/ssl/certs/77732.pem (1708 bytes)
	I1205 20:19:27.303932  121035 start.go:303] post-start completed in 149.094842ms
	I1205 20:19:27.304262  121035 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-099237
	I1205 20:19:27.321325  121035 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/missing-upgrade-099237/config.json ...
	I1205 20:19:27.321589  121035 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 20:19:27.321631  121035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-099237
	I1205 20:19:27.347011  121035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32946 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/missing-upgrade-099237/id_rsa Username:docker}
	I1205 20:19:27.442681  121035 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 20:19:27.448028  121035 start.go:128] duration metric: createHost completed in 3.597490939s
	I1205 20:19:27.448121  121035 cli_runner.go:164] Run: docker container inspect missing-upgrade-099237 --format={{.State.Status}}
	W1205 20:19:27.465270  121035 fix.go:128] unexpected machine state, will restart: <nil>
	I1205 20:19:27.465297  121035 machine.go:88] provisioning docker machine ...
	I1205 20:19:27.465314  121035 ubuntu.go:169] provisioning hostname "missing-upgrade-099237"
	I1205 20:19:27.465371  121035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-099237
	I1205 20:19:27.482051  121035 main.go:141] libmachine: Using SSH client type: native
	I1205 20:19:27.482469  121035 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32946 <nil> <nil>}
	I1205 20:19:27.482488  121035 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-099237 && echo "missing-upgrade-099237" | sudo tee /etc/hostname
	I1205 20:19:27.630086  121035 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-099237
	
	I1205 20:19:27.630183  121035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-099237
	I1205 20:19:27.648282  121035 main.go:141] libmachine: Using SSH client type: native
	I1205 20:19:27.648682  121035 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32946 <nil> <nil>}
	I1205 20:19:27.648705  121035 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-099237' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-099237/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-099237' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:19:27.788408  121035 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:19:27.788430  121035 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17731-2478/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-2478/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-2478/.minikube}
	I1205 20:19:27.788454  121035 ubuntu.go:177] setting up certificates
	I1205 20:19:27.788464  121035 provision.go:83] configureAuth start
	I1205 20:19:27.788527  121035 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-099237
	I1205 20:19:27.806397  121035 provision.go:138] copyHostCerts
	I1205 20:19:27.806460  121035 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-2478/.minikube/ca.pem, removing ...
	I1205 20:19:27.806468  121035 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-2478/.minikube/ca.pem
	I1205 20:19:27.806541  121035 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-2478/.minikube/ca.pem (1078 bytes)
	I1205 20:19:27.806629  121035 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-2478/.minikube/cert.pem, removing ...
	I1205 20:19:27.806633  121035 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-2478/.minikube/cert.pem
	I1205 20:19:27.806677  121035 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-2478/.minikube/cert.pem (1123 bytes)
	I1205 20:19:27.806828  121035 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-2478/.minikube/key.pem, removing ...
	I1205 20:19:27.806835  121035 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-2478/.minikube/key.pem
	I1205 20:19:27.806865  121035 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-2478/.minikube/key.pem (1679 bytes)
	I1205 20:19:27.806917  121035 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-2478/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-099237 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-099237]
	I1205 20:19:28.090468  121035 provision.go:172] copyRemoteCerts
	I1205 20:19:28.090530  121035 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:19:28.090576  121035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-099237
	I1205 20:19:28.108715  121035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32946 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/missing-upgrade-099237/id_rsa Username:docker}
	I1205 20:19:28.208408  121035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:19:28.230006  121035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1205 20:19:28.250937  121035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 20:19:28.275342  121035 provision.go:86] duration metric: configureAuth took 486.851593ms
	I1205 20:19:28.275369  121035 ubuntu.go:193] setting minikube options for container-runtime
	I1205 20:19:28.275556  121035 config.go:182] Loaded profile config "missing-upgrade-099237": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1205 20:19:28.275680  121035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-099237
	I1205 20:19:28.293671  121035 main.go:141] libmachine: Using SSH client type: native
	I1205 20:19:28.294064  121035 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32946 <nil> <nil>}
	I1205 20:19:28.294089  121035 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:19:28.608164  121035 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:19:28.608186  121035 machine.go:91] provisioned docker machine in 1.142880805s
	I1205 20:19:28.608197  121035 start.go:300] post-start starting for "missing-upgrade-099237" (driver="docker")
	I1205 20:19:28.608207  121035 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:19:28.608290  121035 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:19:28.608351  121035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-099237
	I1205 20:19:28.627118  121035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32946 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/missing-upgrade-099237/id_rsa Username:docker}
	I1205 20:19:28.724338  121035 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:19:28.728082  121035 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1205 20:19:28.728106  121035 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 20:19:28.728118  121035 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1205 20:19:28.728126  121035 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1205 20:19:28.728135  121035 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-2478/.minikube/addons for local assets ...
	I1205 20:19:28.728201  121035 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-2478/.minikube/files for local assets ...
	I1205 20:19:28.728280  121035 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-2478/.minikube/files/etc/ssl/certs/77732.pem -> 77732.pem in /etc/ssl/certs
	I1205 20:19:28.728386  121035 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:19:28.736697  121035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/files/etc/ssl/certs/77732.pem --> /etc/ssl/certs/77732.pem (1708 bytes)
	I1205 20:19:28.757422  121035 start.go:303] post-start completed in 149.210487ms
	I1205 20:19:28.757492  121035 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 20:19:28.757537  121035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-099237
	I1205 20:19:28.776052  121035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32946 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/missing-upgrade-099237/id_rsa Username:docker}
	I1205 20:19:28.873463  121035 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 20:19:28.878698  121035 fix.go:56] fixHost completed within 23.377528886s
	I1205 20:19:28.878719  121035 start.go:83] releasing machines lock for "missing-upgrade-099237", held for 23.377574671s
	I1205 20:19:28.878784  121035 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-099237
	I1205 20:19:28.897424  121035 ssh_runner.go:195] Run: cat /version.json
	I1205 20:19:28.897457  121035 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:19:28.897480  121035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-099237
	I1205 20:19:28.897513  121035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-099237
	I1205 20:19:28.916812  121035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32946 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/missing-upgrade-099237/id_rsa Username:docker}
	I1205 20:19:28.923097  121035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32946 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/missing-upgrade-099237/id_rsa Username:docker}
	W1205 20:19:29.162249  121035 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1205 20:19:29.162328  121035 ssh_runner.go:195] Run: systemctl --version
	I1205 20:19:29.167588  121035 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:19:29.278210  121035 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 20:19:29.283950  121035 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:19:29.306248  121035 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1205 20:19:29.306329  121035 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:19:29.341776  121035 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:19:29.341798  121035 start.go:475] detecting cgroup driver to use...
	I1205 20:19:29.341828  121035 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1205 20:19:29.341875  121035 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:19:29.371791  121035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:19:29.383658  121035 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:19:29.383720  121035 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:19:29.396344  121035 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:19:29.408548  121035 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1205 20:19:29.421547  121035 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1205 20:19:29.421625  121035 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:19:29.524594  121035 docker.go:219] disabling docker service ...
	I1205 20:19:29.524658  121035 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:19:29.537049  121035 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:19:29.548715  121035 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:19:29.647245  121035 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:19:29.756769  121035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:19:29.768161  121035 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:19:29.784365  121035 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1205 20:19:29.784466  121035 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:19:29.797520  121035 out.go:177] 
	W1205 20:19:29.799816  121035 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1205 20:19:29.799834  121035 out.go:239] * 
	W1205 20:19:29.800916  121035 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 20:19:29.803192  121035 out.go:177] 

                                                
                                                
** /stderr **
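The fatal step above is the pause_image rewrite: the v0.0.17 kicbase image this test resurrects predates the /etc/crio/crio.conf.d/ drop-in layout, so the sed against 02-crio.conf exits 2 with "No such file or directory". A minimal sketch of a more tolerant rewrite, assuming only the two config locations in play here (the fallback helper is hypothetical, not minikube's actual code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// setPauseImage rewrites pause_image in whichever CRI-O config file the
	// guest actually has, instead of assuming the crio.conf.d drop-in exists.
	func setPauseImage(image string) error {
		candidates := []string{
			"/etc/crio/crio.conf.d/02-crio.conf", // layout the sed above expected
			"/etc/crio/crio.conf",                // monolithic layout in older kicbase images (assumption)
		}
		for _, cfg := range candidates {
			if exec.Command("sudo", "test", "-f", cfg).Run() != nil {
				continue // file absent; try the next candidate
			}
			sed := fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, image, cfg)
			return exec.Command("sh", "-c", sed).Run()
		}
		return fmt.Errorf("no CRI-O config found in %v", candidates)
	}

	func main() {
		if err := setPauseImage("registry.k8s.io/pause:3.2"); err != nil {
			fmt.Println(err)
		}
	}

On this run the monolithic file is presumably the one present, given that `sudo systemctl restart crio` succeeded twice earlier in the same log.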
version_upgrade_test.go:344: failed missing container upgrade from v1.17.0. args: out/minikube-linux-arm64 start -p missing-upgrade-099237 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 90
version_upgrade_test.go:346: *** TestMissingContainerUpgrade FAILED at 2023-12-05 20:19:29.851614522 +0000 UTC m=+2675.054210533
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-099237
helpers_test.go:235: (dbg) docker inspect missing-upgrade-099237:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "40dfabe9a5ea610797c16713939424cca9f1953c858824c9868d90f8bb6d3e7c",
	        "Created": "2023-12-05T20:19:24.629655005Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 122293,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-05T20:19:24.979431295Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/40dfabe9a5ea610797c16713939424cca9f1953c858824c9868d90f8bb6d3e7c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/40dfabe9a5ea610797c16713939424cca9f1953c858824c9868d90f8bb6d3e7c/hostname",
	        "HostsPath": "/var/lib/docker/containers/40dfabe9a5ea610797c16713939424cca9f1953c858824c9868d90f8bb6d3e7c/hosts",
	        "LogPath": "/var/lib/docker/containers/40dfabe9a5ea610797c16713939424cca9f1953c858824c9868d90f8bb6d3e7c/40dfabe9a5ea610797c16713939424cca9f1953c858824c9868d90f8bb6d3e7c-json.log",
	        "Name": "/missing-upgrade-099237",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "missing-upgrade-099237:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "missing-upgrade-099237",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a78b919176f9206dead87df90b8015669d118eee5dea0dea19feb49ad861b94f-init/diff:/var/lib/docker/overlay2/5697cc58b0d9d67086c83734b7a89a81fd166fd2c3c2202b3435272737bd284a/diff:/var/lib/docker/overlay2/a70da0711e78a89bbe89669ad6c843e0a15048991c7c90f9e74c23e46f2a492f/diff:/var/lib/docker/overlay2/17015f398f53caa92ff87ba2dc0227f43cb13a69713d9f631b77c3c0381c36ad/diff:/var/lib/docker/overlay2/4a6d9603be02053b9af9653a774f18fe5f455001e9a382f33b590ce3bdef65ca/diff:/var/lib/docker/overlay2/94bb8ebcad661e74003e096d230983673894a1cb71be45ca462acb1c2d5c4748/diff:/var/lib/docker/overlay2/133a5f7486e714547d91cdd5a61d6076baace0af467e79805d1fba8933c72bca/diff:/var/lib/docker/overlay2/1fd4b43c54fd3e525663854979d5a15723ad94bf6faac2a65c4d9620b1678864/diff:/var/lib/docker/overlay2/7dac2e1ee6b35c42d77b5b52e2a1dccb924f45db4e0b58bf6ceb0732020b1c09/diff:/var/lib/docker/overlay2/a6becad204a2391c2e4914160c2b05b87fd36150b073cdfe9b7185aa17df6507/diff:/var/lib/docker/overlay2/117b55
83d273fb9c3bee4012db76d48a8d5f1b9a4d648e787927b590921b531d/diff:/var/lib/docker/overlay2/add645ffc53414e9ef9accd635109c02595fd3c0ca11eaefce9bebc01460894a/diff:/var/lib/docker/overlay2/643f88e5d008a1d58bdda9ef29e7eaaaf6938e99ff2f76031a3869a53f433bfa/diff:/var/lib/docker/overlay2/82000016aa5e6d48f2224da1e314e80fe304cb787ad07341e1aa9fb135f6c667/diff:/var/lib/docker/overlay2/b9fbb5b5173a3791099f667ba168f194ad6c7ba0ffea6efd849130c0cca38cdf/diff:/var/lib/docker/overlay2/37aa0a7a15d8c43a916e2d193fbf7484fef43f90c79b4c491dd8fe6eb19b2002/diff:/var/lib/docker/overlay2/5849ff9a836d521ec4f4ad702e620fc369df37d40d9909202f5983dd9f6cef00/diff:/var/lib/docker/overlay2/37400a849bbb50bbfc27ced40792ff289c98424cfeef53909878c57632544383/diff:/var/lib/docker/overlay2/93f352786dd003efac2e60d6da6f728ced09f38248067c628157a1d60c4e6d1f/diff:/var/lib/docker/overlay2/1f0ef63305fb8a11a44b605edf9e0fb9fc6d45b838c9103b5aaa18ea9d98158b/diff:/var/lib/docker/overlay2/e7fcef6add1fcc984d3a362ae15c367bba9676436fcdca55f7ad6e2eceab430e/diff:/var/lib/d
ocker/overlay2/d9de153113811b3df8d4baed3dd353a7c4b2c9bee35fa78eed53ff8ab7f1ce34/diff:/var/lib/docker/overlay2/86524ea93e9c9d991112eaf21837879652072f18e87a92455341b0ec29881813/diff:/var/lib/docker/overlay2/b22426385763418f65400a3d73e6c6911c3cd8cfec960a6a7a1bb0bda758ec0f/diff:/var/lib/docker/overlay2/8937f8c4d2e66e95c764f2343a427e554bc55edfeb88d222adfa7e6e0212fe20/diff:/var/lib/docker/overlay2/cdd6f0db8cc3c4204e0609b9e03f9b1570ca287816880fc4b076a18907a85545/diff:/var/lib/docker/overlay2/e2c94e205319cb64d8d70f9fac5f29dfe59443c395d5d1789658955dae9773dd/diff:/var/lib/docker/overlay2/9879d13d237b38d39eecb617e13443052223c204adbab0536b1e766a7530ddaf/diff:/var/lib/docker/overlay2/1819f58e7c3012d77d4db23a2e54d242fd11683241fd089717518d69ed060db4/diff:/var/lib/docker/overlay2/6cbae35a5b69c53fbeb8b40d3123340226003cd0681529a39428028c2e29e72a/diff:/var/lib/docker/overlay2/5317e9ab8a1225437112d0b6c87696c8d390b6af5b9cfa7d48a1f3deae7bd42d/diff:/var/lib/docker/overlay2/d211ca8a6f649bcb73b14e2c8166e0654a2fc9fe6d4c64fe1793ce498ad
50913/diff:/var/lib/docker/overlay2/34dc8334ce2f8fa75884a8395d4a4df8eb8c129ee26d07b0a2140dadfa04da6d/diff:/var/lib/docker/overlay2/7f6b4d183023134f547585810561212cec3292f070fb73c03894240c71a845f0/diff:/var/lib/docker/overlay2/be3b55969684fd305541c4486ba989a913094116c9a3dd8d0ba0b1efdedc05cd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a78b919176f9206dead87df90b8015669d118eee5dea0dea19feb49ad861b94f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a78b919176f9206dead87df90b8015669d118eee5dea0dea19feb49ad861b94f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a78b919176f9206dead87df90b8015669d118eee5dea0dea19feb49ad861b94f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-099237",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-099237/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-099237",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-099237",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-099237",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3550f996baf1fb217a75302efbd7bc14105e6aec7df8856c8a95c971eeb69cde",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32946"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32945"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32942"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32944"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32943"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3550f996baf1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "missing-upgrade-099237": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "40dfabe9a5ea",
	                        "missing-upgrade-099237"
	                    ],
	                    "NetworkID": "7cd4d71fdf38cd5fd8ae4b2e74bff55c9e37ae57de5a0f4d0e946c5fe1e9eae1",
	                    "EndpointID": "172ba3ee419584b255bcbb5599fb441e699f4bea25031d1222286c96467dfed3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
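Throughout the run above, minikube resolves the SSH endpoint with the same Go template it hands to `docker container inspect -f`, and the inspect dump confirms 22/tcp is published on 127.0.0.1:32946. A self-contained sketch of that lookup (the helper name is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort extracts the host port Docker mapped to the guest's port 22,
	// using the exact template seen in the cli_runner lines above.
	func sshHostPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("missing-upgrade-099237")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh published on 127.0.0.1:" + port) // 32946 in the dump above
	}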
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-099237 -n missing-upgrade-099237
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-099237 -n missing-upgrade-099237: exit status 6 (338.345384ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 20:19:30.199515  123288 status.go:415] kubeconfig endpoint: got: 192.168.59.166:8443, want: 192.168.76.2:8443

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-099237" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
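The status.go:415 error is the clearest post-mortem signal: the profile's kubeconfig still carries the stale endpoint 192.168.59.166:8443, while the recreated container landed on 192.168.76.2. A sketch of that comparison using client-go's clientcmd loader, assuming the cluster entry is keyed by profile name (the helper is illustrative, not minikube's exact code):

	package main

	import (
		"fmt"
		"net/url"

		"k8s.io/client-go/tools/clientcmd"
	)

	// verifyEndpoint checks that the kubeconfig cluster entry for a profile
	// still points at the expected control-plane address, mirroring the
	// "kubeconfig endpoint: got ... want ..." check above.
	func verifyEndpoint(kubeconfigPath, profile, want string) error {
		cfg, err := clientcmd.LoadFromFile(kubeconfigPath)
		if err != nil {
			return err
		}
		cluster, ok := cfg.Clusters[profile]
		if !ok {
			return fmt.Errorf("no cluster %q in %s", profile, kubeconfigPath)
		}
		u, err := url.Parse(cluster.Server)
		if err != nil {
			return err
		}
		if u.Host != want {
			return fmt.Errorf("kubeconfig endpoint: got: %s, want: %s", u.Host, want)
		}
		return nil
	}

	func main() {
		err := verifyEndpoint("/home/jenkins/minikube-integration/17731-2478/kubeconfig",
			"missing-upgrade-099237", "192.168.76.2:8443")
		fmt.Println(err) // non-nil reproduces the exit-status-6 condition when stale
	}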
helpers_test.go:175: Cleaning up "missing-upgrade-099237" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-099237
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-099237: (1.908693721s)
--- FAIL: TestMissingContainerUpgrade (185.25s)
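One step from the log above worth annotating: before recreating the container, network.go walked the private 192.168.x.0/24 range in steps of 9 (49, 58 and 67 were taken by existing bridges) and settled on 192.168.76.0/24. A toy version of that scan, assuming only the step size and candidates visible in the log:

	package main

	import "fmt"

	// freeSubnet returns the first 192.168.x.0/24 candidate not already
	// claimed by a Docker bridge, stepping by 9 as the log above shows
	// (49 -> 58 -> 67 -> 76). The scan itself is an illustrative sketch.
	func freeSubnet(taken map[string]bool) string {
		for third := 49; third < 256; third += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			if !taken[cidr] {
				return cidr
			}
		}
		return ""
	}

	func main() {
		taken := map[string]bool{
			"192.168.49.0/24": true, // br-b6ed01875673
			"192.168.58.0/24": true, // br-f407d22902b5
			"192.168.67.0/24": true, // br-a4ed55737c00
		}
		fmt.Println(freeSubnet(taken)) // 192.168.76.0/24, matching network.go:209
	}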

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (79.37s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.17.0.4269436602.exe start -p stopped-upgrade-180271 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.17.0.4269436602.exe start -p stopped-upgrade-180271 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m10.341581545s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.17.0.4269436602.exe -p stopped-upgrade-180271 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.17.0.4269436602.exe -p stopped-upgrade-180271 stop: (1.98330571s)
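The subtest's shape so far: provision a cluster with the archived v1.17.0 release binary, stop it, then hand the same profile to the binary under test. A condensed sketch of those three steps (paths and flags copied from the log; the driver loop is illustrative, not the test's code), where the third step is the one that exits 90 below:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes one upgrade-test step and echoes its combined output.
	func run(bin string, args ...string) error {
		out, err := exec.Command(bin, args...).CombinedOutput()
		fmt.Printf("%s %v -> %v\n%s", bin, args, err, out)
		return err
	}

	func main() {
		profile := "stopped-upgrade-180271"
		oldBin := "/tmp/minikube-v1.17.0.4269436602.exe" // archived release binary
		newBin := "out/minikube-linux-arm64"             // binary under test

		_ = run(oldBin, "start", "-p", profile, "--memory=2200",
			"--vm-driver=docker", "--container-runtime=crio")
		_ = run(oldBin, "-p", profile, "stop")
		// The upgrade step: same profile, new binary.
		_ = run(newBin, "start", "-p", profile, "--memory=2200",
			"--alsologtostderr", "-v=1", "--driver=docker", "--container-runtime=crio")
	}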
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-180271 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p stopped-upgrade-180271 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (7.040332261s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-180271] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17731
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17731-2478/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-2478/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-180271 in cluster stopped-upgrade-180271
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-180271" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 20:20:45.696459  127300 out.go:296] Setting OutFile to fd 1 ...
	I1205 20:20:45.696650  127300 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:20:45.696678  127300 out.go:309] Setting ErrFile to fd 2...
	I1205 20:20:45.696699  127300 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:20:45.696954  127300 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-2478/.minikube/bin
	I1205 20:20:45.697331  127300 out.go:303] Setting JSON to false
	I1205 20:20:45.698284  127300 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3792,"bootTime":1701803854,"procs":270,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 20:20:45.698380  127300 start.go:138] virtualization:  
	I1205 20:20:45.701139  127300 out.go:177] * [stopped-upgrade-180271] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1205 20:20:45.703341  127300 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 20:20:45.704982  127300 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:20:45.703484  127300 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17731-2478/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I1205 20:20:45.703518  127300 notify.go:220] Checking for updates...
	I1205 20:20:45.709164  127300 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-2478/kubeconfig
	I1205 20:20:45.711087  127300 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-2478/.minikube
	I1205 20:20:45.713337  127300 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 20:20:45.715354  127300 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:20:45.717837  127300 config.go:182] Loaded profile config "stopped-upgrade-180271": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1205 20:20:45.720659  127300 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1205 20:20:45.722986  127300 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 20:20:45.785943  127300 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1205 20:20:45.786044  127300 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 20:20:45.882740  127300 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17731-2478/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I1205 20:20:45.957484  127300 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:45 SystemTime:2023-12-05 20:20:45.947392875 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1205 20:20:45.957580  127300 docker.go:295] overlay module found
	I1205 20:20:45.959982  127300 out.go:177] * Using the docker driver based on existing profile
	I1205 20:20:45.961630  127300 start.go:298] selected driver: docker
	I1205 20:20:45.961647  127300 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-180271 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-180271 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.48 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1205 20:20:45.961734  127300 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:20:45.962281  127300 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 20:20:46.076640  127300 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:45 SystemTime:2023-12-05 20:20:46.065170039 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1205 20:20:46.076975  127300 cni.go:84] Creating CNI manager for ""
	I1205 20:20:46.076993  127300 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 20:20:46.077006  127300 start_flags.go:323] config:
	{Name:stopped-upgrade-180271 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-180271 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.48 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1205 20:20:46.079143  127300 out.go:177] * Starting control plane node stopped-upgrade-180271 in cluster stopped-upgrade-180271
	I1205 20:20:46.080649  127300 cache.go:121] Beginning downloading kic base image for docker with crio
	I1205 20:20:46.082300  127300 out.go:177] * Pulling base image ...
	I1205 20:20:46.083918  127300 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1205 20:20:46.084080  127300 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1205 20:20:46.104515  127300 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I1205 20:20:46.104536  127300 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W1205 20:20:46.157322  127300 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1205 20:20:46.157467  127300 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/stopped-upgrade-180271/config.json ...
	I1205 20:20:46.157599  127300 cache.go:107] acquiring lock: {Name:mk8a4de1334950434f49dfbc7cc0e43bfdbdb2f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:20:46.157687  127300 cache.go:115] /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 20:20:46.157698  127300 cache.go:194] Successfully downloaded all kic artifacts
	I1205 20:20:46.157708  127300 cache.go:107] acquiring lock: {Name:mka6142c8784c3eb00c6bbf3953947386497eac0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:20:46.157735  127300 start.go:365] acquiring machines lock for stopped-upgrade-180271: {Name:mk9221fdf3285de3acd39e54b9b7765c7f027b58 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:20:46.157740  127300 cache.go:115] /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1205 20:20:46.157751  127300 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 42.224µs
	I1205 20:20:46.157759  127300 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I1205 20:20:46.157768  127300 cache.go:107] acquiring lock: {Name:mk52e13bce5a8ae7e8a902fdeaaeade9860de835 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:20:46.157774  127300 start.go:369] acquired machines lock for "stopped-upgrade-180271" in 26.486µs
	I1205 20:20:46.157788  127300 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:20:46.157793  127300 fix.go:54] fixHost starting: 
	I1205 20:20:46.157794  127300 cache.go:115] /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1205 20:20:46.157800  127300 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 32.55µs
	I1205 20:20:46.157806  127300 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I1205 20:20:46.157814  127300 cache.go:107] acquiring lock: {Name:mkb0e889217661a7ef4acaf28adec14f1a06cff8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:20:46.157838  127300 cache.go:115] /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1205 20:20:46.157843  127300 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 29.243µs
	I1205 20:20:46.157849  127300 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I1205 20:20:46.157859  127300 cache.go:107] acquiring lock: {Name:mk0d29774b4c5a0581fe381cfea74b828beff54c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:20:46.157882  127300 cache.go:115] /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1205 20:20:46.157887  127300 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 28.898µs
	I1205 20:20:46.157893  127300 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I1205 20:20:46.157902  127300 cache.go:107] acquiring lock: {Name:mk09e84b1799fc084fd06411f9276e8b348e91e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:20:46.157926  127300 cache.go:115] /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1205 20:20:46.157930  127300 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 29.136µs
	I1205 20:20:46.157936  127300 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I1205 20:20:46.157947  127300 cache.go:107] acquiring lock: {Name:mk546e34e8abee7d5a5f944e7950213bca6c089c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:20:46.157971  127300 cache.go:115] /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1205 20:20:46.157975  127300 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 32.599µs
	I1205 20:20:46.157981  127300 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1205 20:20:46.157989  127300 cache.go:107] acquiring lock: {Name:mk1fa1653fd02fdb26ff9477a9afc4d03e380d8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:20:46.158011  127300 cache.go:115] /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1205 20:20:46.158015  127300 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 27.939µs
	I1205 20:20:46.158022  127300 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I1205 20:20:46.157695  127300 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 102.434µs
	I1205 20:20:46.158029  127300 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17731-2478/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 20:20:46.158034  127300 cache.go:87] Successfully saved all images to host disk.
	I1205 20:20:46.158041  127300 cli_runner.go:164] Run: docker container inspect stopped-upgrade-180271 --format={{.State.Status}}
	I1205 20:20:46.176592  127300 fix.go:102] recreateIfNeeded on stopped-upgrade-180271: state=Stopped err=<nil>
	W1205 20:20:46.176619  127300 fix.go:128] unexpected machine state, will restart: <nil>
	I1205 20:20:46.180084  127300 out.go:177] * Restarting existing docker container for "stopped-upgrade-180271" ...
	I1205 20:20:46.182123  127300 cli_runner.go:164] Run: docker start stopped-upgrade-180271
	I1205 20:20:46.589894  127300 cli_runner.go:164] Run: docker container inspect stopped-upgrade-180271 --format={{.State.Status}}
	I1205 20:20:46.623481  127300 kic.go:430] container "stopped-upgrade-180271" state is running.
	I1205 20:20:46.624031  127300 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-180271
	I1205 20:20:46.652708  127300 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/stopped-upgrade-180271/config.json ...
	I1205 20:20:46.652932  127300 machine.go:88] provisioning docker machine ...
	I1205 20:20:46.652947  127300 ubuntu.go:169] provisioning hostname "stopped-upgrade-180271"
	I1205 20:20:46.652994  127300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-180271
	I1205 20:20:46.682729  127300 main.go:141] libmachine: Using SSH client type: native
	I1205 20:20:46.683171  127300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32954 <nil> <nil>}
	I1205 20:20:46.683184  127300 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-180271 && echo "stopped-upgrade-180271" | sudo tee /etc/hostname
	I1205 20:20:46.684467  127300 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34472->127.0.0.1:32954: read: connection reset by peer
	I1205 20:20:49.851807  127300 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-180271
	
	I1205 20:20:49.851884  127300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-180271
	I1205 20:20:49.880300  127300 main.go:141] libmachine: Using SSH client type: native
	I1205 20:20:49.880690  127300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32954 <nil> <nil>}
	I1205 20:20:49.880708  127300 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-180271' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-180271/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-180271' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:20:50.029231  127300 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:20:50.029294  127300 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17731-2478/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-2478/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-2478/.minikube}
	I1205 20:20:50.029335  127300 ubuntu.go:177] setting up certificates
	I1205 20:20:50.029371  127300 provision.go:83] configureAuth start
	I1205 20:20:50.029453  127300 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-180271
	I1205 20:20:50.055496  127300 provision.go:138] copyHostCerts
	I1205 20:20:50.055558  127300 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-2478/.minikube/ca.pem, removing ...
	I1205 20:20:50.055577  127300 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-2478/.minikube/ca.pem
	I1205 20:20:50.055648  127300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-2478/.minikube/ca.pem (1078 bytes)
	I1205 20:20:50.055884  127300 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-2478/.minikube/cert.pem, removing ...
	I1205 20:20:50.055894  127300 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-2478/.minikube/cert.pem
	I1205 20:20:50.055935  127300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-2478/.minikube/cert.pem (1123 bytes)
	I1205 20:20:50.055999  127300 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-2478/.minikube/key.pem, removing ...
	I1205 20:20:50.056004  127300 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-2478/.minikube/key.pem
	I1205 20:20:50.056030  127300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-2478/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-2478/.minikube/key.pem (1679 bytes)
	I1205 20:20:50.056380  127300 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-2478/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-180271 san=[192.168.59.48 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-180271]
	I1205 20:20:50.951729  127300 provision.go:172] copyRemoteCerts
	I1205 20:20:50.951828  127300 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:20:50.951875  127300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-180271
	I1205 20:20:50.969308  127300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/stopped-upgrade-180271/id_rsa Username:docker}
	I1205 20:20:51.068555  127300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:20:51.091751  127300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1205 20:20:51.114335  127300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:20:51.136023  127300 provision.go:86] duration metric: configureAuth took 1.106624213s
	I1205 20:20:51.136059  127300 ubuntu.go:193] setting minikube options for container-runtime
	I1205 20:20:51.136228  127300 config.go:182] Loaded profile config "stopped-upgrade-180271": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1205 20:20:51.136335  127300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-180271
	I1205 20:20:51.155911  127300 main.go:141] libmachine: Using SSH client type: native
	I1205 20:20:51.156312  127300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32954 <nil> <nil>}
	I1205 20:20:51.156327  127300 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:20:51.575638  127300 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:20:51.575659  127300 machine.go:91] provisioned docker machine in 4.922718396s
	I1205 20:20:51.575670  127300 start.go:300] post-start starting for "stopped-upgrade-180271" (driver="docker")
	I1205 20:20:51.575680  127300 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:20:51.575776  127300 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:20:51.575822  127300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-180271
	I1205 20:20:51.596978  127300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/stopped-upgrade-180271/id_rsa Username:docker}
	I1205 20:20:51.700079  127300 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:20:51.703563  127300 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1205 20:20:51.703585  127300 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 20:20:51.703598  127300 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1205 20:20:51.703605  127300 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1205 20:20:51.703619  127300 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-2478/.minikube/addons for local assets ...
	I1205 20:20:51.703675  127300 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-2478/.minikube/files for local assets ...
	I1205 20:20:51.703778  127300 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-2478/.minikube/files/etc/ssl/certs/77732.pem -> 77732.pem in /etc/ssl/certs
	I1205 20:20:51.703884  127300 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:20:51.711607  127300 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-2478/.minikube/files/etc/ssl/certs/77732.pem --> /etc/ssl/certs/77732.pem (1708 bytes)
	I1205 20:20:51.731922  127300 start.go:303] post-start completed in 156.237932ms
	I1205 20:20:51.731992  127300 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 20:20:51.732036  127300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-180271
	I1205 20:20:51.749123  127300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/stopped-upgrade-180271/id_rsa Username:docker}
	I1205 20:20:51.845478  127300 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 20:20:51.850315  127300 fix.go:56] fixHost completed within 5.692515219s
	I1205 20:20:51.850335  127300 start.go:83] releasing machines lock for "stopped-upgrade-180271", held for 5.692552979s
	I1205 20:20:51.850414  127300 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-180271
	I1205 20:20:51.867736  127300 ssh_runner.go:195] Run: cat /version.json
	I1205 20:20:51.867816  127300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-180271
	I1205 20:20:51.868035  127300 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:20:51.868075  127300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-180271
	I1205 20:20:51.889220  127300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/stopped-upgrade-180271/id_rsa Username:docker}
	I1205 20:20:51.890866  127300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/stopped-upgrade-180271/id_rsa Username:docker}
	W1205 20:20:52.050310  127300 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1205 20:20:52.050392  127300 ssh_runner.go:195] Run: systemctl --version
	I1205 20:20:52.055570  127300 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:20:52.145937  127300 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 20:20:52.151220  127300 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:20:52.172607  127300 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1205 20:20:52.172709  127300 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:20:52.199176  127300 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:20:52.199200  127300 start.go:475] detecting cgroup driver to use...
	I1205 20:20:52.199237  127300 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1205 20:20:52.199289  127300 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:20:52.227958  127300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:20:52.239840  127300 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:20:52.239900  127300 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:20:52.251653  127300 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:20:52.263434  127300 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1205 20:20:52.275474  127300 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1205 20:20:52.275582  127300 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:20:52.382998  127300 docker.go:219] disabling docker service ...
	I1205 20:20:52.383079  127300 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:20:52.394740  127300 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:20:52.405125  127300 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:20:52.502726  127300 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:20:52.614959  127300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:20:52.627293  127300 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:20:52.644032  127300 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1205 20:20:52.644110  127300 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:20:52.657277  127300 out.go:177] 
	W1205 20:20:52.659040  127300 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1205 20:20:52.659057  127300 out.go:239] * 
	* 
	W1205 20:20:52.660059  127300 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 20:20:52.661899  127300 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p stopped-upgrade-180271 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (79.37s)
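The root cause is visible in the stderr transcript above: the new binary rewrites pause_image in /etc/crio/crio.conf.d/02-crio.conf, but the container restored from the v1.17.0-era kicbase image has no such drop-in file (sed exits with status 2, "No such file or directory"), so the start aborts with RUNTIME_ENABLE. Below is a minimal Go sketch of a more defensive version of that step; the function name setCrioPauseImage and the fallback to /etc/crio/crio.conf are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"fmt"
	"os/exec"
)

// setCrioPauseImage sketches the step that failed above: pointing CRI-O at a
// given pause image. Unlike the failing command, it probes for the drop-in
// file and falls back to the main config, since older kicbase images do not
// ship a crio.conf.d directory.
func setCrioPauseImage(pauseImage string) error {
	// Prefer the drop-in used by newer images.
	target := "/etc/crio/crio.conf.d/02-crio.conf"
	if err := exec.Command("sudo", "test", "-f", target).Run(); err != nil {
		// Hypothetical fallback for old images that only have the main config.
		target = "/etc/crio/crio.conf"
	}
	// Note: sed is a no-op if no pause_image line exists in the target file;
	// a fuller implementation would append the key in that case.
	script := fmt.Sprintf(
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`,
		pauseImage, target)
	if out, err := exec.Command("sh", "-c", script).CombinedOutput(); err != nil {
		return fmt.Errorf("update pause_image in %s: %v: %s", target, err, out)
	}
	return nil
}

func main() {
	if err := setCrioPauseImage("registry.k8s.io/pause:3.2"); err != nil {
		fmt.Println(err)
	}
}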

                                                
                                    

Test pass (276/315)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 20.66
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.28.4/json-events 13.78
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.09
17 TestDownloadOnly/v1.29.0-rc.1/json-events 20.46
18 TestDownloadOnly/v1.29.0-rc.1/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.1/LogsDuration 0.09
23 TestDownloadOnly/DeleteAll 0.24
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.15
26 TestBinaryMirror 0.6
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
32 TestAddons/Setup 149.17
34 TestAddons/parallel/Registry 15.36
37 TestAddons/parallel/MetricsServer 5.8
40 TestAddons/parallel/CSI 69.65
41 TestAddons/parallel/Headlamp 11.47
42 TestAddons/parallel/CloudSpanner 5.65
43 TestAddons/parallel/LocalPath 51.89
44 TestAddons/parallel/NvidiaDevicePlugin 5.58
47 TestAddons/serial/GCPAuth/Namespaces 0.18
48 TestAddons/StoppedEnableDisable 12.36
49 TestCertOptions 35.25
50 TestCertExpiration 244.71
52 TestForceSystemdFlag 39.05
53 TestForceSystemdEnv 42.27
59 TestErrorSpam/setup 33.67
60 TestErrorSpam/start 0.83
61 TestErrorSpam/status 1.12
62 TestErrorSpam/pause 1.83
63 TestErrorSpam/unpause 1.9
64 TestErrorSpam/stop 1.5
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 45.45
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 32.14
71 TestFunctional/serial/KubeContext 0.07
72 TestFunctional/serial/KubectlGetPods 0.12
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.64
76 TestFunctional/serial/CacheCmd/cache/add_local 1.06
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
78 TestFunctional/serial/CacheCmd/cache/list 0.07
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.34
80 TestFunctional/serial/CacheCmd/cache/cache_reload 2.09
81 TestFunctional/serial/CacheCmd/cache/delete 0.14
82 TestFunctional/serial/MinikubeKubectlCmd 0.16
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.16
84 TestFunctional/serial/ExtraConfig 32.98
85 TestFunctional/serial/ComponentHealth 0.1
86 TestFunctional/serial/LogsCmd 1.75
87 TestFunctional/serial/LogsFileCmd 1.72
88 TestFunctional/serial/InvalidService 4.79
90 TestFunctional/parallel/ConfigCmd 0.63
91 TestFunctional/parallel/DashboardCmd 9.96
92 TestFunctional/parallel/DryRun 0.51
93 TestFunctional/parallel/InternationalLanguage 0.22
94 TestFunctional/parallel/StatusCmd 1.26
98 TestFunctional/parallel/ServiceCmdConnect 10.73
99 TestFunctional/parallel/AddonsCmd 0.26
100 TestFunctional/parallel/PersistentVolumeClaim 25.06
102 TestFunctional/parallel/SSHCmd 0.78
103 TestFunctional/parallel/CpCmd 1.61
105 TestFunctional/parallel/FileSync 0.36
106 TestFunctional/parallel/CertSync 2.37
110 TestFunctional/parallel/NodeLabels 0.11
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.85
114 TestFunctional/parallel/License 0.44
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.64
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.4
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/ServiceCmd/DeployApp 8.26
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
128 TestFunctional/parallel/ProfileCmd/profile_list 0.41
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
130 TestFunctional/parallel/MountCmd/any-port 8.49
131 TestFunctional/parallel/ServiceCmd/List 0.59
132 TestFunctional/parallel/ServiceCmd/JSONOutput 0.56
133 TestFunctional/parallel/ServiceCmd/HTTPS 0.45
134 TestFunctional/parallel/ServiceCmd/Format 0.41
135 TestFunctional/parallel/ServiceCmd/URL 0.45
136 TestFunctional/parallel/MountCmd/specific-port 2.42
137 TestFunctional/parallel/MountCmd/VerifyCleanup 2.4
138 TestFunctional/parallel/Version/short 0.1
139 TestFunctional/parallel/Version/components 1
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.34
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
144 TestFunctional/parallel/ImageCommands/ImageBuild 3.48
145 TestFunctional/parallel/ImageCommands/Setup 2.54
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.27
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.26
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.24
149 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.86
150 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.84
151 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.88
152 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.95
153 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
154 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.22
155 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.02
156 TestFunctional/delete_addon-resizer_images 0.08
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
162 TestIngressAddonLegacy/StartLegacyK8sCluster 99.06
164 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 12.05
165 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.68
169 TestJSONOutput/start/Command 77.5
170 TestJSONOutput/start/Audit 0
172 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/pause/Command 0.79
176 TestJSONOutput/pause/Audit 0
178 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/unpause/Command 0.72
182 TestJSONOutput/unpause/Audit 0
184 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/stop/Command 5.86
188 TestJSONOutput/stop/Audit 0
190 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
192 TestErrorJSONOutput 0.28
194 TestKicCustomNetwork/create_custom_network 47.52
195 TestKicCustomNetwork/use_default_bridge_network 37.04
196 TestKicExistingNetwork 37.03
197 TestKicCustomSubnet 33.81
198 TestKicStaticIP 33.99
199 TestMainNoArgs 0.06
200 TestMinikubeProfile 68.66
203 TestMountStart/serial/StartWithMountFirst 9.58
204 TestMountStart/serial/VerifyMountFirst 0.3
205 TestMountStart/serial/StartWithMountSecond 6.95
206 TestMountStart/serial/VerifyMountSecond 0.3
207 TestMountStart/serial/DeleteFirst 1.65
208 TestMountStart/serial/VerifyMountPostDelete 0.3
209 TestMountStart/serial/Stop 1.22
210 TestMountStart/serial/RestartStopped 7.9
211 TestMountStart/serial/VerifyMountPostStop 0.3
214 TestMultiNode/serial/FreshStart2Nodes 123.97
215 TestMultiNode/serial/DeployApp2Nodes 6.74
217 TestMultiNode/serial/AddNode 48.37
218 TestMultiNode/serial/MultiNodeLabels 0.09
219 TestMultiNode/serial/ProfileList 0.36
220 TestMultiNode/serial/CopyFile 10.98
221 TestMultiNode/serial/StopNode 2.35
222 TestMultiNode/serial/StartAfterStop 12.55
223 TestMultiNode/serial/RestartKeepsNodes 123.83
224 TestMultiNode/serial/DeleteNode 5.31
225 TestMultiNode/serial/StopMultiNode 24.01
226 TestMultiNode/serial/RestartMultiNode 81.8
227 TestMultiNode/serial/ValidateNameConflict 32.39
232 TestPreload 173.36
234 TestScheduledStopUnix 107.74
237 TestInsufficientStorage 11.38
240 TestKubernetesUpgrade 392.11
243 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
244 TestNoKubernetes/serial/StartWithK8s 39.1
245 TestNoKubernetes/serial/StartWithStopK8s 16.43
246 TestNoKubernetes/serial/Start 7.65
247 TestNoKubernetes/serial/VerifyK8sNotRunning 0.4
248 TestNoKubernetes/serial/ProfileList 1.05
249 TestNoKubernetes/serial/Stop 1.29
250 TestNoKubernetes/serial/StartNoArgs 7.35
251 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
252 TestStoppedBinaryUpgrade/Setup 1.18
254 TestStoppedBinaryUpgrade/MinikubeLogs 0.82
263 TestPause/serial/Start 56.07
264 TestPause/serial/SecondStartNoReconfiguration 29.5
265 TestPause/serial/Pause 1.1
266 TestPause/serial/VerifyStatus 0.46
267 TestPause/serial/Unpause 0.73
268 TestPause/serial/PauseAgain 1.37
269 TestPause/serial/DeletePaused 3.31
270 TestPause/serial/VerifyDeletedResources 0.48
278 TestNetworkPlugins/group/false 5.44
283 TestStartStop/group/old-k8s-version/serial/FirstStart 125.68
284 TestStartStop/group/old-k8s-version/serial/DeployApp 11.54
285 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.01
286 TestStartStop/group/old-k8s-version/serial/Stop 12.13
287 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.29
288 TestStartStop/group/old-k8s-version/serial/SecondStart 444.59
290 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 86.73
291 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.45
292 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.14
293 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.04
294 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
295 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 354.87
296 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.03
297 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.13
298 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.3
299 TestStartStop/group/old-k8s-version/serial/Pause 4.23
301 TestStartStop/group/embed-certs/serial/FirstStart 83.36
302 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 18.05
303 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
304 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
305 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.37
307 TestStartStop/group/no-preload/serial/FirstStart 64.54
308 TestStartStop/group/embed-certs/serial/DeployApp 10.61
309 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.89
310 TestStartStop/group/embed-certs/serial/Stop 12.36
311 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
312 TestStartStop/group/embed-certs/serial/SecondStart 350.13
313 TestStartStop/group/no-preload/serial/DeployApp 10.1
314 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.48
315 TestStartStop/group/no-preload/serial/Stop 12.27
316 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
317 TestStartStop/group/no-preload/serial/SecondStart 360.58
318 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 12.04
319 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.15
320 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.41
321 TestStartStop/group/embed-certs/serial/Pause 4.82
323 TestStartStop/group/newest-cni/serial/FirstStart 51.13
324 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 11.03
325 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
326 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
327 TestStartStop/group/no-preload/serial/Pause 3.59
328 TestNetworkPlugins/group/auto/Start 84.99
329 TestStartStop/group/newest-cni/serial/DeployApp 0
330 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.66
331 TestStartStop/group/newest-cni/serial/Stop 1.44
332 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.44
333 TestStartStop/group/newest-cni/serial/SecondStart 33.89
334 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
335 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
337 TestStartStop/group/newest-cni/serial/Pause 3.41
338 TestNetworkPlugins/group/kindnet/Start 77.33
339 TestNetworkPlugins/group/auto/KubeletFlags 0.34
340 TestNetworkPlugins/group/auto/NetCatPod 11.39
341 TestNetworkPlugins/group/auto/DNS 0.23
342 TestNetworkPlugins/group/auto/Localhost 0.21
343 TestNetworkPlugins/group/auto/HairPin 0.21
344 TestNetworkPlugins/group/calico/Start 71.39
345 TestNetworkPlugins/group/kindnet/ControllerPod 5.05
346 TestNetworkPlugins/group/kindnet/KubeletFlags 0.36
347 TestNetworkPlugins/group/kindnet/NetCatPod 12.38
348 TestNetworkPlugins/group/kindnet/DNS 0.27
349 TestNetworkPlugins/group/kindnet/Localhost 0.25
350 TestNetworkPlugins/group/kindnet/HairPin 0.25
351 TestNetworkPlugins/group/custom-flannel/Start 72.32
352 TestNetworkPlugins/group/calico/ControllerPod 5.05
353 TestNetworkPlugins/group/calico/KubeletFlags 0.42
354 TestNetworkPlugins/group/calico/NetCatPod 12.45
355 TestNetworkPlugins/group/calico/DNS 0.25
356 TestNetworkPlugins/group/calico/Localhost 0.21
357 TestNetworkPlugins/group/calico/HairPin 0.19
358 TestNetworkPlugins/group/enable-default-cni/Start 90.3
359 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.61
360 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.66
361 TestNetworkPlugins/group/custom-flannel/DNS 0.24
362 TestNetworkPlugins/group/custom-flannel/Localhost 0.25
363 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
364 TestNetworkPlugins/group/flannel/Start 63.04
365 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.48
366 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.54
367 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
368 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
369 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
370 TestNetworkPlugins/group/flannel/ControllerPod 5.03
371 TestNetworkPlugins/group/flannel/KubeletFlags 0.49
372 TestNetworkPlugins/group/flannel/NetCatPod 11.49
373 TestNetworkPlugins/group/bridge/Start 49.57
374 TestNetworkPlugins/group/flannel/DNS 0.3
375 TestNetworkPlugins/group/flannel/Localhost 0.24
376 TestNetworkPlugins/group/flannel/HairPin 0.27
377 TestNetworkPlugins/group/bridge/KubeletFlags 0.33
378 TestNetworkPlugins/group/bridge/NetCatPod 11.31
379 TestNetworkPlugins/group/bridge/DNS 0.2
380 TestNetworkPlugins/group/bridge/Localhost 0.18
381 TestNetworkPlugins/group/bridge/HairPin 0.18
x
+
TestDownloadOnly/v1.16.0/json-events (20.66s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-855824 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-855824 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (20.657852797s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (20.66s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-855824
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-855824: exit status 85 (85.665743ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-855824 | jenkins | v1.32.0 | 05 Dec 23 19:34 UTC |          |
	|         | -p download-only-855824        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/05 19:34:54
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 19:34:54.911696    7778 out.go:296] Setting OutFile to fd 1 ...
	I1205 19:34:54.911873    7778 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:34:54.911896    7778 out.go:309] Setting ErrFile to fd 2...
	I1205 19:34:54.911914    7778 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:34:54.912169    7778 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-2478/.minikube/bin
	W1205 19:34:54.912337    7778 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17731-2478/.minikube/config/config.json: open /home/jenkins/minikube-integration/17731-2478/.minikube/config/config.json: no such file or directory
	I1205 19:34:54.912776    7778 out.go:303] Setting JSON to true
	I1205 19:34:54.913572    7778 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1041,"bootTime":1701803854,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 19:34:54.913637    7778 start.go:138] virtualization:  
	I1205 19:34:54.916610    7778 out.go:97] [download-only-855824] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1205 19:34:54.918501    7778 out.go:169] MINIKUBE_LOCATION=17731
	W1205 19:34:54.916792    7778 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17731-2478/.minikube/cache/preloaded-tarball: no such file or directory
	I1205 19:34:54.916857    7778 notify.go:220] Checking for updates...
	I1205 19:34:54.920431    7778 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:34:54.922463    7778 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17731-2478/kubeconfig
	I1205 19:34:54.924597    7778 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-2478/.minikube
	I1205 19:34:54.926511    7778 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1205 19:34:54.930280    7778 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1205 19:34:54.930506    7778 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 19:34:54.954044    7778 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1205 19:34:54.954144    7778 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:34:55.302773    7778 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-12-05 19:34:55.293203145 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1205 19:34:55.302877    7778 docker.go:295] overlay module found
	I1205 19:34:55.304927    7778 out.go:97] Using the docker driver based on user configuration
	I1205 19:34:55.304947    7778 start.go:298] selected driver: docker
	I1205 19:34:55.304953    7778 start.go:902] validating driver "docker" against <nil>
	I1205 19:34:55.305044    7778 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:34:55.370236    7778 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-12-05 19:34:55.361449445 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1205 19:34:55.370387    7778 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1205 19:34:55.370681    7778 start_flags.go:394] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1205 19:34:55.370857    7778 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 19:34:55.373258    7778 out.go:169] Using Docker driver with root privileges
	I1205 19:34:55.375279    7778 cni.go:84] Creating CNI manager for ""
	I1205 19:34:55.375295    7778 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 19:34:55.375306    7778 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 19:34:55.375320    7778 start_flags.go:323] config:
	{Name:download-only-855824 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-855824 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:34:55.377348    7778 out.go:97] Starting control plane node download-only-855824 in cluster download-only-855824
	I1205 19:34:55.377367    7778 cache.go:121] Beginning downloading kic base image for docker with crio
	I1205 19:34:55.379446    7778 out.go:97] Pulling base image ...
	I1205 19:34:55.379467    7778 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1205 19:34:55.379620    7778 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local docker daemon
	I1205 19:34:55.396282    7778 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f to local cache
	I1205 19:34:55.396480    7778 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local cache directory
	I1205 19:34:55.396581    7778 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f to local cache
	I1205 19:34:55.463615    7778 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I1205 19:34:55.463639    7778 cache.go:56] Caching tarball of preloaded images
	I1205 19:34:55.463796    7778 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1205 19:34:55.466001    7778 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1205 19:34:55.466023    7778 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I1205 19:34:55.580460    7778 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:743cd3b7071469270e4dbdc0d89badaa -> /home/jenkins/minikube-integration/17731-2478/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I1205 19:35:08.747576    7778 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I1205 19:35:08.747693    7778 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17731-2478/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I1205 19:35:09.735836    7778 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I1205 19:35:09.736220    7778 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/download-only-855824/config.json ...
	I1205 19:35:09.736265    7778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/download-only-855824/config.json: {Name:mk35b5cd9c6dfcdf00483161c9fc5c2976d3f866 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:09.736452    7778 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1205 19:35:09.736686    7778 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl.sha1 -> /home/jenkins/minikube-integration/17731-2478/.minikube/cache/linux/arm64/v1.16.0/kubectl
	I1205 19:35:13.570609    7778 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-855824"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)
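
The passing run above also records the mechanics of a preload download: download.go fetches the tarball from a URL whose ?checksum=md5:... query carries the expected digest, and preload.go then verifies the saved file against it. A minimal Go sketch of that download-then-verify step, using the URL and digest from this log (the helper name is hypothetical, not minikube's actual code):

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    // downloadAndVerify streams url into dest while hashing it, then compares
    // the MD5 hex digest against the one carried in the download URL's query.
    func downloadAndVerify(url, dest, wantMD5 string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        out, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer out.Close()
        h := md5.New()
        // Tee the body so the file is written and hashed in one pass.
        if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
        }
        return nil
    }

    func main() {
        err := downloadAndVerify(
            "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4",
            "preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4",
            "743cd3b7071469270e4dbdc0d89badaa", // digest from the log's download URL
        )
        fmt.Println("verify:", err)
    }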

TestDownloadOnly/v1.28.4/json-events (13.78s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-855824 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-855824 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (13.776797919s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (13.78s)
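
The json-events variant exercises start -o=json, which makes minikube emit machine-readable progress events on stdout instead of the human-oriented text. A hedged consumer sketch, assuming only that each event arrives as one JSON object per line (the exact event schema is not shown in this report):

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    func main() {
        // Pipe `minikube start -o=json ...` output into this program.
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
        for sc.Scan() {
            var ev map[string]any
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                continue // ignore any non-JSON lines
            }
            fmt.Printf("event: %v\n", ev)
        }
    }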

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)
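
preload-exists only has to confirm that the previous step left the tarball in the profile cache. A sketch of the equivalent check, with the cache path mirroring the one in the logs above (the MINIKUBE_HOME-relative layout here is an assumption):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        home, _ := os.UserHomeDir()
        // Same cache layout the log shows under .minikube/cache/preloaded-tarball.
        tarball := filepath.Join(home, ".minikube", "cache", "preloaded-tarball",
            "preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4")
        if _, err := os.Stat(tarball); err != nil {
            fmt.Println("preload missing:", err)
            return
        }
        fmt.Println("preload exists:", tarball)
    }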

TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-855824
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-855824: exit status 85 (85.216605ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-855824 | jenkins | v1.32.0 | 05 Dec 23 19:34 UTC |          |
	|         | -p download-only-855824        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-855824 | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC |          |
	|         | -p download-only-855824        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/05 19:35:15
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 19:35:15.655869    7852 out.go:296] Setting OutFile to fd 1 ...
	I1205 19:35:15.656075    7852 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:35:15.656086    7852 out.go:309] Setting ErrFile to fd 2...
	I1205 19:35:15.656092    7852 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:35:15.656384    7852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-2478/.minikube/bin
	W1205 19:35:15.656520    7852 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17731-2478/.minikube/config/config.json: open /home/jenkins/minikube-integration/17731-2478/.minikube/config/config.json: no such file or directory
	I1205 19:35:15.656784    7852 out.go:303] Setting JSON to true
	I1205 19:35:15.657715    7852 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1062,"bootTime":1701803854,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 19:35:15.657789    7852 start.go:138] virtualization:  
	I1205 19:35:15.660298    7852 out.go:97] [download-only-855824] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1205 19:35:15.662337    7852 out.go:169] MINIKUBE_LOCATION=17731
	I1205 19:35:15.660608    7852 notify.go:220] Checking for updates...
	I1205 19:35:15.664934    7852 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:35:15.667038    7852 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17731-2478/kubeconfig
	I1205 19:35:15.668794    7852 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-2478/.minikube
	I1205 19:35:15.670614    7852 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1205 19:35:15.674324    7852 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1205 19:35:15.674845    7852 config.go:182] Loaded profile config "download-only-855824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1205 19:35:15.674891    7852 start.go:810] api.Load failed for download-only-855824: filestore "download-only-855824": Docker machine "download-only-855824" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1205 19:35:15.674998    7852 driver.go:392] Setting default libvirt URI to qemu:///system
	W1205 19:35:15.675034    7852 start.go:810] api.Load failed for download-only-855824: filestore "download-only-855824": Docker machine "download-only-855824" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1205 19:35:15.698221    7852 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1205 19:35:15.698325    7852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:35:15.775000    7852 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-12-05 19:35:15.765618442 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1205 19:35:15.775093    7852 docker.go:295] overlay module found
	I1205 19:35:15.793361    7852 out.go:97] Using the docker driver based on existing profile
	I1205 19:35:15.793401    7852 start.go:298] selected driver: docker
	I1205 19:35:15.793408    7852 start.go:902] validating driver "docker" against &{Name:download-only-855824 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-855824 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:35:15.793582    7852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:35:15.866341    7852 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-12-05 19:35:15.857538206 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1205 19:35:15.866759    7852 cni.go:84] Creating CNI manager for ""
	I1205 19:35:15.866779    7852 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 19:35:15.866791    7852 start_flags.go:323] config:
	{Name:download-only-855824 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-855824 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:35:15.868885    7852 out.go:97] Starting control plane node download-only-855824 in cluster download-only-855824
	I1205 19:35:15.868903    7852 cache.go:121] Beginning downloading kic base image for docker with crio
	I1205 19:35:15.870762    7852 out.go:97] Pulling base image ...
	I1205 19:35:15.870784    7852 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 19:35:15.870894    7852 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local docker daemon
	I1205 19:35:15.888415    7852 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f to local cache
	I1205 19:35:15.888539    7852 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local cache directory
	I1205 19:35:15.888560    7852 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local cache directory, skipping pull
	I1205 19:35:15.888565    7852 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f exists in cache, skipping pull
	I1205 19:35:15.888573    7852 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f as a tarball
	I1205 19:35:15.951069    7852 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I1205 19:35:15.951098    7852 cache.go:56] Caching tarball of preloaded images
	I1205 19:35:15.951259    7852 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 19:35:15.953445    7852 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1205 19:35:15.953465    7852 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 ...
	I1205 19:35:16.063662    7852 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4?checksum=md5:23e2271fd1a7b32f52ce36ae8363c081 -> /home/jenkins/minikube-integration/17731-2478/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-855824"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.09s)
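
As with the v1.16.0 run, `minikube logs` exits non-zero (status 85) because the download-only profile has no control plane node, yet the test still passes: the harness treats the exit code as data rather than a failure. A sketch of that capture pattern with os/exec (binary and profile name taken from the log above):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-arm64", "logs", "-p", "download-only-855824")
        out, err := cmd.CombinedOutput()
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            // Expected for a download-only profile: it exists, but no node does.
            fmt.Printf("non-zero exit: %d\n", ee.ExitCode())
        }
        fmt.Printf("%s", out)
    }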

TestDownloadOnly/v1.29.0-rc.1/json-events (20.46s)

=== RUN   TestDownloadOnly/v1.29.0-rc.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-855824 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-855824 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (20.454892301s)
--- PASS: TestDownloadOnly/v1.29.0-rc.1/json-events (20.46s)

TestDownloadOnly/v1.29.0-rc.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.1/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.1/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.1/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.29.0-rc.1/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-855824
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-855824: exit status 85 (91.588127ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-855824 | jenkins | v1.32.0 | 05 Dec 23 19:34 UTC |          |
	|         | -p download-only-855824           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-855824 | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC |          |
	|         | -p download-only-855824           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-855824 | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC |          |
	|         | -p download-only-855824           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.1 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/05 19:35:29
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 19:35:29.520686    7926 out.go:296] Setting OutFile to fd 1 ...
	I1205 19:35:29.520905    7926 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:35:29.520915    7926 out.go:309] Setting ErrFile to fd 2...
	I1205 19:35:29.520921    7926 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:35:29.521265    7926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-2478/.minikube/bin
	W1205 19:35:29.521442    7926 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17731-2478/.minikube/config/config.json: open /home/jenkins/minikube-integration/17731-2478/.minikube/config/config.json: no such file or directory
	I1205 19:35:29.521719    7926 out.go:303] Setting JSON to true
	I1205 19:35:29.522520    7926 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1076,"bootTime":1701803854,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 19:35:29.522587    7926 start.go:138] virtualization:  
	I1205 19:35:29.524714    7926 out.go:97] [download-only-855824] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1205 19:35:29.526591    7926 out.go:169] MINIKUBE_LOCATION=17731
	I1205 19:35:29.524988    7926 notify.go:220] Checking for updates...
	I1205 19:35:29.528597    7926 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:35:29.530555    7926 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17731-2478/kubeconfig
	I1205 19:35:29.532460    7926 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-2478/.minikube
	I1205 19:35:29.534444    7926 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1205 19:35:29.538445    7926 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1205 19:35:29.539015    7926 config.go:182] Loaded profile config "download-only-855824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W1205 19:35:29.539113    7926 start.go:810] api.Load failed for download-only-855824: filestore "download-only-855824": Docker machine "download-only-855824" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1205 19:35:29.539229    7926 driver.go:392] Setting default libvirt URI to qemu:///system
	W1205 19:35:29.539257    7926 start.go:810] api.Load failed for download-only-855824: filestore "download-only-855824": Docker machine "download-only-855824" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1205 19:35:29.562704    7926 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1205 19:35:29.562812    7926 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:35:29.649921    7926 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-12-05 19:35:29.640396003 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1205 19:35:29.650025    7926 docker.go:295] overlay module found
	I1205 19:35:29.651873    7926 out.go:97] Using the docker driver based on existing profile
	I1205 19:35:29.651895    7926 start.go:298] selected driver: docker
	I1205 19:35:29.651901    7926 start.go:902] validating driver "docker" against &{Name:download-only-855824 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-855824 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:35:29.652059    7926 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:35:29.728138    7926 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-12-05 19:35:29.719303191 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1205 19:35:29.728579    7926 cni.go:84] Creating CNI manager for ""
	I1205 19:35:29.728599    7926 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1205 19:35:29.728610    7926 start_flags.go:323] config:
	{Name:download-only-855824 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:download-only-855824 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:35:29.730660    7926 out.go:97] Starting control plane node download-only-855824 in cluster download-only-855824
	I1205 19:35:29.730688    7926 cache.go:121] Beginning downloading kic base image for docker with crio
	I1205 19:35:29.732528    7926 out.go:97] Pulling base image ...
	I1205 19:35:29.732551    7926 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1205 19:35:29.732716    7926 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local docker daemon
	I1205 19:35:29.748948    7926 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f to local cache
	I1205 19:35:29.749082    7926 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local cache directory
	I1205 19:35:29.749106    7926 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f in local cache directory, skipping pull
	I1205 19:35:29.749113    7926 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f exists in cache, skipping pull
	I1205 19:35:29.749121    7926 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f as a tarball
	I1205 19:35:29.794569    7926 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.1/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1205 19:35:29.794596    7926 cache.go:56] Caching tarball of preloaded images
	I1205 19:35:29.794737    7926 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1205 19:35:29.796766    7926 out.go:97] Downloading Kubernetes v1.29.0-rc.1 preload ...
	I1205 19:35:29.796784    7926 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-arm64.tar.lz4 ...
	I1205 19:35:29.909721    7926 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.1/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:a062174e9404cf628b661eb179f470ab -> /home/jenkins/minikube-integration/17731-2478/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I1205 19:35:41.280821    7926 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-arm64.tar.lz4 ...
	I1205 19:35:41.280983    7926 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17731-2478/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-arm64.tar.lz4 ...
	I1205 19:35:42.143131    7926 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.1 on crio
	I1205 19:35:42.143300    7926 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/download-only-855824/config.json ...
	I1205 19:35:42.143589    7926 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1205 19:35:42.143795    7926 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/17731-2478/.minikube/cache/linux/arm64/v1.29.0-rc.1/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-855824"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.1/LogsDuration (0.09s)

TestDownloadOnly/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.24s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-855824
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.6s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-741946 --alsologtostderr --binary-mirror http://127.0.0.1:32795 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-741946" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-741946
--- PASS: TestBinaryMirror (0.60s)
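
TestBinaryMirror points minikube at --binary-mirror http://127.0.0.1:32795, i.e. a local HTTP endpoint standing in for dl.k8s.io when the Kubernetes binaries are fetched. A minimal stand-in can be a plain static file server; the ./mirror directory layout below is an assumption, not the harness's actual setup:

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // Serve a local tree of Kubernetes release binaries on the port the
        // test passes via --binary-mirror.
        log.Fatal(http.ListenAndServe("127.0.0.1:32795",
            http.FileServer(http.Dir("./mirror"))))
    }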

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-753790
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-753790: exit status 85 (87.067655ms)
-- stdout --
	* Profile "addons-753790" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-753790"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-753790
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-753790: exit status 85 (86.067936ms)
-- stdout --
	* Profile "addons-753790" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-753790"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

TestAddons/Setup (149.17s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-753790 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-753790 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m29.17408067s)
--- PASS: TestAddons/Setup (149.17s)

TestAddons/parallel/Registry (15.36s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 37.92359ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-j6vr2" [2025c2db-46b4-422f-bf24-e183c416a7ae] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.019407265s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-6gp6x" [a29e840a-e254-486b-98ae-b646b95120f2] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.012017176s
addons_test.go:339: (dbg) Run:  kubectl --context addons-753790 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-753790 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-753790 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.230151095s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p addons-753790 ip
2023/12/05 19:38:35 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-arm64 -p addons-753790 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.36s)
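The registry check above reduces to two shell commands; a minimal sketch for reproducing it by hand, assuming the addons-753790 profile from this run is still up (the node IP 192.168.49.2 and port 5000 are taken from the log):

	# Probe the in-cluster registry service from a throwaway busybox pod,
	# as the test does at addons_test.go:344.
	kubectl --context addons-753790 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

	# Then hit the registry through the node's published port.
	curl -s http://192.168.49.2:5000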

TestAddons/parallel/MetricsServer (5.8s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 8.956382ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-5nn9m" [dfdc10e3-f82d-4c2f-b28e-d02c4992cbd7] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.011543308s
addons_test.go:414: (dbg) Run:  kubectl --context addons-753790 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-arm64 -p addons-753790 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.80s)
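Reproducing this check by hand amounts to waiting for the deployment and querying the Metrics API; a sketch against the same profile (kubectl wait stands in here for the suite's polling helpers):

	# Block until the metrics-server pod is Ready, then read pod metrics,
	# mirroring addons_test.go:408 and :414.
	kubectl --context addons-753790 wait --for=condition=ready pod \
	  -l k8s-app=metrics-server -n kube-system --timeout=6m
	kubectl --context addons-753790 top pods -n kube-system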

TestAddons/parallel/CSI (69.65s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 47.349601ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-753790 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-753790 get pvc hpvc -o jsonpath={.status.phase} -n default
... (the polling line above repeats 37 times in total while waiting for pvc "hpvc" to become Bound)
addons_test.go:573: (dbg) Run:  kubectl --context addons-753790 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [bbb7fd94-88c7-493f-905f-b22582fa5260] Pending
helpers_test.go:344: "task-pv-pod" [bbb7fd94-88c7-493f-905f-b22582fa5260] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [bbb7fd94-88c7-493f-905f-b22582fa5260] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.019490796s
addons_test.go:583: (dbg) Run:  kubectl --context addons-753790 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-753790 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-753790 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-753790 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-753790 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-753790 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-753790 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
... (the polling line above repeats 6 times in total while waiting for pvc "hpvc-restore" to become Bound)
addons_test.go:615: (dbg) Run:  kubectl --context addons-753790 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [6d9f0134-5ee2-4c31-9adf-55df0d06489e] Pending
helpers_test.go:344: "task-pv-pod-restore" [6d9f0134-5ee2-4c31-9adf-55df0d06489e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [6d9f0134-5ee2-4c31-9adf-55df0d06489e] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.014334771s
addons_test.go:625: (dbg) Run:  kubectl --context addons-753790 delete pod task-pv-pod-restore
addons_test.go:625: (dbg) Done: kubectl --context addons-753790 delete pod task-pv-pod-restore: (1.128432923s)
addons_test.go:629: (dbg) Run:  kubectl --context addons-753790 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-753790 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-arm64 -p addons-753790 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-arm64 -p addons-753790 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.75143565s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-arm64 -p addons-753790 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (69.65s)
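The blocks of repeated helpers_test.go:394 lines above are a poll loop over the PVC phase; in shell terms the wait is roughly the following sketch (the poll interval is an assumption, the context name and jsonpath come from this run):

	# Poll the claim's phase until the CSI driver binds it, one jsonpath
	# query per iteration, as the test helper does.
	while [ "$(kubectl --context addons-753790 get pvc hpvc \
	    -o jsonpath='{.status.phase}' -n default)" != "Bound" ]; do
	  sleep 2
	done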

TestAddons/parallel/Headlamp (11.47s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-753790 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-753790 --alsologtostderr -v=1: (1.449903205s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-4wt8j" [27bc25b9-0ab0-4356-b9e8-aaf74eff4014] Pending
helpers_test.go:344: "headlamp-777fd4b855-4wt8j" [27bc25b9-0ab0-4356-b9e8-aaf74eff4014] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-4wt8j" [27bc25b9-0ab0-4356-b9e8-aaf74eff4014] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.018358979s
--- PASS: TestAddons/parallel/Headlamp (11.47s)

TestAddons/parallel/CloudSpanner (5.65s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-d8xzq" [b7681896-2064-4af1-ac34-df7a6d6aecc7] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.016112889s
addons_test.go:859: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-753790
--- PASS: TestAddons/parallel/CloudSpanner (5.65s)

TestAddons/parallel/LocalPath (51.89s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-753790 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-753790 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-753790 get pvc test-pvc -o jsonpath={.status.phase} -n default
... (the polling line above repeats 5 times in total while waiting for pvc "test-pvc" to become Bound)
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [7345186c-64aa-4130-86c9-da64812dc7bf] Pending
helpers_test.go:344: "test-local-path" [7345186c-64aa-4130-86c9-da64812dc7bf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [7345186c-64aa-4130-86c9-da64812dc7bf] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [7345186c-64aa-4130-86c9-da64812dc7bf] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.010963339s
addons_test.go:890: (dbg) Run:  kubectl --context addons-753790 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-arm64 -p addons-753790 ssh "cat /opt/local-path-provisioner/pvc-3d274b4a-eada-4209-8083-82421c6fefec_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-753790 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-753790 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-arm64 -p addons-753790 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-arm64 -p addons-753790 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.284448286s)
--- PASS: TestAddons/parallel/LocalPath (51.89s)
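The ssh step above is the actual data check: the local-path provisioner creates a host directory named after the claim's UID, so the path differs on every run. With the path from this log, the manual equivalent is:

	# Read back the file the test pod wrote onto the local-path volume.
	out/minikube-linux-arm64 -p addons-753790 ssh \
	  "cat /opt/local-path-provisioner/pvc-3d274b4a-eada-4209-8083-82421c6fefec_default_test-pvc/file1"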

TestAddons/parallel/NvidiaDevicePlugin (5.58s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-5g44z" [e67179c1-2a66-42ab-af09-92698daea73e] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.016793107s
addons_test.go:954: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-753790
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.58s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-753790 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-753790 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/StoppedEnableDisable (12.36s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-753790
addons_test.go:171: (dbg) Done: out/minikube-linux-arm64 stop -p addons-753790: (12.048936161s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-753790
addons_test.go:179: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-753790
addons_test.go:184: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-753790
--- PASS: TestAddons/StoppedEnableDisable (12.36s)

TestCertOptions (35.25s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-553887 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-553887 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (32.545450097s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-553887 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-553887 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-553887 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-553887" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-553887
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-553887: (2.021260326s)
--- PASS: TestCertOptions (35.25s)
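The assertion behind cert_options_test.go:60 is that the extra --apiserver-ips/--apiserver-names values land in the certificate's subject alternative names; a hand-run sketch against the same profile:

	# Print the API server cert and pull out the SAN block, which should
	# list 192.168.15.15 and www.google.com among the entries.
	out/minikube-linux-arm64 -p cert-options-553887 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"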

TestCertExpiration (244.71s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-023467 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-023467 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (37.43981421s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-023467 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-023467 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (24.763615251s)
helpers_test.go:175: Cleaning up "cert-expiration-023467" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-023467
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-023467: (2.508204865s)
--- PASS: TestCertExpiration (244.71s)
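A quick way to confirm the two --cert-expiration settings took effect is to read the certificate expiry back from the node; a sketch, assuming the apiserver cert lives at the same path as in TestCertOptions:

	# -enddate prints the notAfter field: roughly 3m out after the first
	# start, roughly 8760h out after the second.
	out/minikube-linux-arm64 -p cert-expiration-023467 ssh \
	  "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"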

TestForceSystemdFlag (39.05s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-415679 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-415679 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.040743307s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-415679 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-415679" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-415679
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-415679: (2.59923663s)
--- PASS: TestForceSystemdFlag (39.05s)

TestForceSystemdEnv (42.27s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-397469 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-397469 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (39.495725781s)
helpers_test.go:175: Cleaning up "force-systemd-env-397469" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-397469
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-397469: (2.775218435s)
--- PASS: TestForceSystemdEnv (42.27s)

TestErrorSpam/setup (33.67s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-412262 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-412262 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-412262 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-412262 --driver=docker  --container-runtime=crio: (33.673845152s)
--- PASS: TestErrorSpam/setup (33.67s)

TestErrorSpam/start (0.83s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-412262 --log_dir /tmp/nospam-412262 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-412262 --log_dir /tmp/nospam-412262 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-412262 --log_dir /tmp/nospam-412262 start --dry-run
--- PASS: TestErrorSpam/start (0.83s)

TestErrorSpam/status (1.12s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-412262 --log_dir /tmp/nospam-412262 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-412262 --log_dir /tmp/nospam-412262 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-412262 --log_dir /tmp/nospam-412262 status
--- PASS: TestErrorSpam/status (1.12s)

TestErrorSpam/pause (1.83s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-412262 --log_dir /tmp/nospam-412262 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-412262 --log_dir /tmp/nospam-412262 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-412262 --log_dir /tmp/nospam-412262 pause
--- PASS: TestErrorSpam/pause (1.83s)

TestErrorSpam/unpause (1.9s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-412262 --log_dir /tmp/nospam-412262 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-412262 --log_dir /tmp/nospam-412262 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-412262 --log_dir /tmp/nospam-412262 unpause
--- PASS: TestErrorSpam/unpause (1.90s)

TestErrorSpam/stop (1.5s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-412262 --log_dir /tmp/nospam-412262 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-412262 --log_dir /tmp/nospam-412262 stop: (1.28636104s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-412262 --log_dir /tmp/nospam-412262 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-412262 --log_dir /tmp/nospam-412262 stop
--- PASS: TestErrorSpam/stop (1.50s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17731-2478/.minikube/files/etc/test/nested/copy/7773/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (45.45s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-025502 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-025502 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (45.451631199s)
--- PASS: TestFunctional/serial/StartWithProxy (45.45s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (32.14s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-025502 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-025502 --alsologtostderr -v=8: (32.13154641s)
functional_test.go:659: soft start took 32.135900665s for "functional-025502" cluster.
--- PASS: TestFunctional/serial/SoftStart (32.14s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.12s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-025502 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.64s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-025502 cache add registry.k8s.io/pause:3.1: (1.247896554s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-025502 cache add registry.k8s.io/pause:3.3: (1.253819308s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-025502 cache add registry.k8s.io/pause:latest: (1.135204629s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.64s)

TestFunctional/serial/CacheCmd/cache/add_local (1.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-025502 /tmp/TestFunctionalserialCacheCmdcacheadd_local2402820140/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 cache add minikube-local-cache-test:functional-025502
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 cache delete minikube-local-cache-test:functional-025502
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-025502
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.06s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-025502 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (340.213536ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-025502 cache reload: (1.023257319s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.09s)
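The round trip this test performs, in plain shell (commands exactly as in the log above):

	# Remove the cached image from the node, confirm it is gone (crictl
	# inspecti exits 1), restore it from minikube's cache, confirm again.
	out/minikube-linux-arm64 -p functional-025502 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-025502 ssh sudo crictl inspecti registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-025502 cache reload
	out/minikube-linux-arm64 -p functional-025502 ssh sudo crictl inspecti registry.k8s.io/pause:latest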

TestFunctional/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

TestFunctional/serial/MinikubeKubectlCmd (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 kubectl -- --context functional-025502 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.16s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-025502 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

TestFunctional/serial/ExtraConfig (32.98s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-025502 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-025502 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.981150616s)
functional_test.go:757: restart took 32.981248028s for "functional-025502" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.98s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-025502 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
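The phase/status pairs above come from one JSON query over the control-plane pods; a compact jsonpath equivalent (label selector and namespace are from the log, the output format is an assumption for illustration):

	# List each control-plane component with its pod phase.
	kubectl --context functional-025502 get po -l tier=control-plane -n kube-system \
	  -o jsonpath='{range .items[*]}{.metadata.labels.component}{": "}{.status.phase}{"\n"}{end}'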

TestFunctional/serial/LogsCmd (1.75s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-025502 logs: (1.751843333s)
--- PASS: TestFunctional/serial/LogsCmd (1.75s)

TestFunctional/serial/LogsFileCmd (1.72s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 logs --file /tmp/TestFunctionalserialLogsFileCmd798947551/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-025502 logs --file /tmp/TestFunctionalserialLogsFileCmd798947551/001/logs.txt: (1.713397326s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.72s)

TestFunctional/serial/InvalidService (4.79s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-025502 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-025502
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-025502: exit status 115 (591.757979ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30903 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-025502 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.79s)

TestFunctional/parallel/ConfigCmd (0.63s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-025502 config get cpus: exit status 14 (114.021916ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-025502 config get cpus: exit status 14 (127.932412ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.63s)
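The two exit-status-14 failures above bracket a set/get/unset round trip; end to end it looks like this (commands from the log; the comments note the expected results):

	out/minikube-linux-arm64 -p functional-025502 config get cpus    # exit 14: key unset
	out/minikube-linux-arm64 -p functional-025502 config set cpus 2
	out/minikube-linux-arm64 -p functional-025502 config get cpus    # prints 2
	out/minikube-linux-arm64 -p functional-025502 config unset cpus
	out/minikube-linux-arm64 -p functional-025502 config get cpus    # exit 14 again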

TestFunctional/parallel/DashboardCmd (9.96s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-025502 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-025502 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 33353: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.96s)

TestFunctional/parallel/DryRun (0.51s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-025502 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-025502 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (215.706359ms)
-- stdout --
	* [functional-025502] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17731
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17731-2478/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-2478/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1205 19:51:47.825503   32962 out.go:296] Setting OutFile to fd 1 ...
	I1205 19:51:47.825680   32962 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:51:47.825692   32962 out.go:309] Setting ErrFile to fd 2...
	I1205 19:51:47.825707   32962 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:51:47.825989   32962 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-2478/.minikube/bin
	I1205 19:51:47.826365   32962 out.go:303] Setting JSON to false
	I1205 19:51:47.827409   32962 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2054,"bootTime":1701803854,"procs":341,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 19:51:47.827471   32962 start.go:138] virtualization:  
	I1205 19:51:47.830076   32962 out.go:177] * [functional-025502] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1205 19:51:47.835186   32962 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 19:51:47.837279   32962 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:51:47.835294   32962 notify.go:220] Checking for updates...
	I1205 19:51:47.841571   32962 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-2478/kubeconfig
	I1205 19:51:47.843813   32962 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-2478/.minikube
	I1205 19:51:47.845698   32962 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 19:51:47.847320   32962 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 19:51:47.849519   32962 config.go:182] Loaded profile config "functional-025502": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 19:51:47.850168   32962 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 19:51:47.874741   32962 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1205 19:51:47.874864   32962 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:51:47.967379   32962 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-12-05 19:51:47.95748395 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1205 19:51:47.967475   32962 docker.go:295] overlay module found
	I1205 19:51:47.969662   32962 out.go:177] * Using the docker driver based on existing profile
	I1205 19:51:47.971595   32962 start.go:298] selected driver: docker
	I1205 19:51:47.971610   32962 start.go:902] validating driver "docker" against &{Name:functional-025502 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-025502 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:51:47.971733   32962 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 19:51:47.973973   32962 out.go:177] 
	W1205 19:51:47.975912   32962 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1205 19:51:47.977845   32962 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-025502 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.51s)
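
The dry-run path above validates resource requests before touching the driver: asking for 250MB trips the 1800MB floor and exits with code 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A minimal hand-run Go sketch of that failing half, assuming the out/minikube-linux-arm64 binary and functional-025502 profile from this run:

// dryrun_check.go: illustration only, not test code from this suite.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start",
		"-p", "functional-025502", "--dry-run", "--memory", "250MB",
		"--alsologtostderr", "--driver=docker", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if ee, ok := err.(*exec.ExitError); ok {
		// 250MB is below the 1800MB usable minimum reported above, so
		// the dry run fails fast with RSRC_INSUFFICIENT_REQ_MEMORY.
		fmt.Println("exit code:", ee.ExitCode()) // 23 in this run
	}
}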

TestFunctional/parallel/InternationalLanguage (0.22s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-025502 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-025502 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (223.355874ms)
-- stdout --
	* [functional-025502] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17731
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17731-2478/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-2478/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1205 19:51:47.615323   32923 out.go:296] Setting OutFile to fd 1 ...
	I1205 19:51:47.615475   32923 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:51:47.615484   32923 out.go:309] Setting ErrFile to fd 2...
	I1205 19:51:47.615490   32923 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:51:47.616215   32923 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-2478/.minikube/bin
	I1205 19:51:47.616581   32923 out.go:303] Setting JSON to false
	I1205 19:51:47.617603   32923 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2054,"bootTime":1701803854,"procs":341,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 19:51:47.617676   32923 start.go:138] virtualization:  
	I1205 19:51:47.620217   32923 out.go:177] * [functional-025502] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I1205 19:51:47.622614   32923 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 19:51:47.622767   32923 notify.go:220] Checking for updates...
	I1205 19:51:47.626961   32923 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:51:47.628865   32923 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-2478/kubeconfig
	I1205 19:51:47.630965   32923 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-2478/.minikube
	I1205 19:51:47.633453   32923 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 19:51:47.635790   32923 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 19:51:47.638102   32923 config.go:182] Loaded profile config "functional-025502": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 19:51:47.638841   32923 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 19:51:47.662463   32923 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1205 19:51:47.662577   32923 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 19:51:47.746325   32923 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-12-05 19:51:47.737292274 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1205 19:51:47.746418   32923 docker.go:295] overlay module found
	I1205 19:51:47.752425   32923 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1205 19:51:47.754788   32923 start.go:298] selected driver: docker
	I1205 19:51:47.754804   32923 start.go:902] validating driver "docker" against &{Name:functional-025502 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-025502 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:51:47.754899   32923 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 19:51:47.757665   32923 out.go:177] 
	W1205 19:51:47.759660   32923 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1205 19:51:47.761472   32923 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)
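
The French output above is the same dry-run failure as in DryRun, rendered through minikube's message catalog. A sketch of reproducing it by hand, under the assumption that the catalog is selected from the standard locale variables (LC_ALL/LANG):

// locale_check.go: illustration only; LC_ALL=fr is an assumption here.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start",
		"-p", "functional-025502", "--dry-run", "--memory", "250MB",
		"--driver=docker", "--container-runtime=crio")
	cmd.Env = append(os.Environ(), "LC_ALL=fr")
	out, _ := cmd.CombinedOutput()
	fmt.Printf("%s", out) // expect "Utilisation du pilote docker ..."
}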

TestFunctional/parallel/StatusCmd (1.26s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.26s)

TestFunctional/parallel/ServiceCmdConnect (10.73s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-025502 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-025502 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-hhxzb" [8a98fb77-78a4-48ff-8ccd-794ec602cb78] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-hhxzb" [8a98fb77-78a4-48ff-8ccd-794ec602cb78] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.026832875s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:30729
functional_test.go:1674: http://192.168.49.2:30729: success! body:
Hostname: hello-node-connect-7799dfb7c6-hhxzb
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30729
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.73s)
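
The check above boils down to: expose the deployment as a NodePort service, ask minikube for its URL, and expect a response from the echoserver. A hand-run sketch of the same round trip, assuming the hello-node-connect service created above still exists:

// svc_connect.go: illustration only.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-025502",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out)) // e.g. http://192.168.49.2:30729
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s", body) // echoserver reports hostname and request info
}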

TestFunctional/parallel/AddonsCmd (0.26s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.26s)

TestFunctional/parallel/PersistentVolumeClaim (25.06s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [5786c8a3-83ed-49be-bcce-a58f575fdb3c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.025502343s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-025502 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-025502 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-025502 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-025502 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3fbfc1ce-f8a6-4e63-a341-db177f43d355] Pending
helpers_test.go:344: "sp-pod" [3fbfc1ce-f8a6-4e63-a341-db177f43d355] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3fbfc1ce-f8a6-4e63-a341-db177f43d355] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.014741984s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-025502 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-025502 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-025502 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [33e3b4f3-8fb2-4ac5-9da9-44571ff81765] Pending
helpers_test.go:344: "sp-pod" [33e3b4f3-8fb2-4ac5-9da9-44571ff81765] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [33e3b4f3-8fb2-4ac5-9da9-44571ff81765] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.020178813s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-025502 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.06s)
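
The sequence above is a persistence check: write a file through the first sp-pod, delete the pod, recreate it from the same manifest, and expect the file to survive on the PVC-backed volume. A condensed sketch of those steps (illustration only; it assumes the testdata manifests used above and skips the pod-readiness wait the real test performs):

// pvc_persistence.go
package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a command against the test cluster and fails loudly.
func kubectl(args ...string) string {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "functional-025502"}, args...)...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
	return string(out)
}

func main() {
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (Wait for the new sp-pod to reach Running before this last step.)
	fmt.Print(kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount"))
	// "foo" should still be listed: the data lives on the PVC, not in the pod.
}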

TestFunctional/parallel/SSHCmd (0.78s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.78s)

TestFunctional/parallel/CpCmd (1.61s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 ssh -n functional-025502 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 cp functional-025502:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3889867496/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 ssh -n functional-025502 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.61s)

TestFunctional/parallel/FileSync (0.36s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/7773/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 ssh "sudo cat /etc/test/nested/copy/7773/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)

TestFunctional/parallel/CertSync (2.37s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/7773.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 ssh "sudo cat /etc/ssl/certs/7773.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/7773.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 ssh "sudo cat /usr/share/ca-certificates/7773.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/77732.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 ssh "sudo cat /etc/ssl/certs/77732.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/77732.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 ssh "sudo cat /usr/share/ca-certificates/77732.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.37s)
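
The numeric names checked above (51391683.0, 3ec20f2e.0) follow the c_rehash convention: each is the certificate's OpenSSL subject hash with a ".0" suffix, so the same cert is reachable both by file name and by hash link. A sketch of computing that name, assuming openssl is on PATH:

// cert_hash.go: illustration only.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := os.Args[1] // e.g. /etc/ssl/certs/7773.pem from the run above
	out, err := exec.Command("openssl", "x509", "-noout",
		"-subject_hash", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s.0\n", strings.TrimSpace(string(out))) // the hash-link name
}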

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-025502 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.85s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-025502 ssh "sudo systemctl is-active docker": exit status 1 (446.243202ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-025502 ssh "sudo systemctl is-active containerd": exit status 1 (401.782019ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.85s)
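
The exit codes above carry the whole assertion: systemctl is-active exits 0 only for an active unit, and "inactive" yields status 3, so a non-zero exit from both docker and containerd is exactly what a crio-only node should produce. A hand-run sketch of the same probe:

// runtime_check.go: illustration only.
package main

import (
	"fmt"
	"os/exec"
)

// isActive reports whether a systemd unit is active on the node; a
// failed remote command (status 3 above) comes back as a non-nil error.
func isActive(unit string) bool {
	return exec.Command("out/minikube-linux-arm64", "-p", "functional-025502",
		"ssh", "sudo systemctl is-active "+unit).Run() == nil
}

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		fmt.Println(unit, "active:", isActive(unit)) // both false under crio
	}
}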

TestFunctional/parallel/License (0.44s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.44s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-025502 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-025502 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-025502 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-025502 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 30985: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-025502 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.4s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-025502 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [8ae3d56d-6c07-4ec5-9517-be385d0ba28a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [8ae3d56d-6c07-4ec5-9517-be385d0ba28a] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.017642105s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.40s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-025502 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)
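
The jsonpath query above is the standard way to read the ingress IP that "minikube tunnel" assigns to a LoadBalancer service; it is empty when no tunnel is running. The same query, wrapped for hand use:

// ingress_ip.go: illustration only.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-025502",
		"get", "svc", "nginx-svc",
		"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // populated only while the tunnel runs
}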

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.13.254 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-025502 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-025502 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-025502 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-fhc7m" [169183d2-10d4-4992-ae67-859adb6b898c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-fhc7m" [169183d2-10d4-4992-ae67-859adb6b898c] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.041394567s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.26s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "332.851882ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "74.666354ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "340.731823ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "65.512547ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)
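
The JSON form above is the machine-readable variant of profile list. A decoding sketch (illustration only; the "valid"/"Name" fields below are an assumed subset of the output schema, not a pinned contract):

// profiles.go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64",
		"profile", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var res struct {
		Valid []struct {
			Name string `json:"Name"`
		} `json:"valid"` // assumed key; see note above
	}
	if err := json.Unmarshal(out, &res); err != nil {
		panic(err)
	}
	for _, p := range res.Valid {
		fmt.Println("valid profile:", p.Name)
	}
}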

TestFunctional/parallel/MountCmd/any-port (8.49s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-025502 /tmp/TestFunctionalparallelMountCmdany-port1714710999/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1701805901832989194" to /tmp/TestFunctionalparallelMountCmdany-port1714710999/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1701805901832989194" to /tmp/TestFunctionalparallelMountCmdany-port1714710999/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1701805901832989194" to /tmp/TestFunctionalparallelMountCmdany-port1714710999/001/test-1701805901832989194
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-025502 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (395.003298ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  5 19:51 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  5 19:51 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  5 19:51 test-1701805901832989194
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 ssh cat /mount-9p/test-1701805901832989194
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-025502 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [12e2d346-8eeb-4876-bba2-67bef807600e] Pending
helpers_test.go:344: "busybox-mount" [12e2d346-8eeb-4876-bba2-67bef807600e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [12e2d346-8eeb-4876-bba2-67bef807600e] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [12e2d346-8eeb-4876-bba2-67bef807600e] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.014051867s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-025502 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-025502 /tmp/TestFunctionalparallelMountCmdany-port1714710999/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.49s)
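
The findmnt probe above is how the test decides the 9p mount is visible inside the node; the first attempt fails simply because the mount daemon has not settled yet, and the retry succeeds. A hand-run sketch of the same probe, assuming a "minikube mount <dir>:/mount-9p" daemon is already running:

// mount_check.go: illustration only.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// findmnt (and hence the piped grep) exits non-zero until the 9p
	// mount appears, so an error here means "not ready yet".
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-025502",
		"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
	if err != nil {
		fmt.Println("mount not visible yet:", err)
		return
	}
	fmt.Printf("%s", out)
}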

TestFunctional/parallel/ServiceCmd/List (0.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.59s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 service list -o json
functional_test.go:1493: Took "556.895843ms" to run "out/minikube-linux-arm64 -p functional-025502 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.56s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:31726
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

TestFunctional/parallel/ServiceCmd/Format (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.41s)

TestFunctional/parallel/ServiceCmd/URL (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:31726
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.45s)

TestFunctional/parallel/MountCmd/specific-port (2.42s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-025502 /tmp/TestFunctionalparallelMountCmdspecific-port1633578779/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-025502 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (551.766834ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-025502 /tmp/TestFunctionalparallelMountCmdspecific-port1633578779/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-025502 ssh "sudo umount -f /mount-9p": exit status 1 (475.296176ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-025502 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-025502 /tmp/TestFunctionalparallelMountCmdspecific-port1633578779/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.42s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.4s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-025502 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3514630327/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-025502 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3514630327/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-025502 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3514630327/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-025502 ssh "findmnt -T" /mount1: (1.333227301s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-025502 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-025502 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3514630327/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-025502 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3514630327/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-025502 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3514630327/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.40s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (1s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-025502 version -o=json --components: (1.002140285s)
--- PASS: TestFunctional/parallel/Version/components (1.00s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-025502 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-025502
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-025502 image ls --format short --alsologtostderr:
I1205 19:52:16.514517   35429 out.go:296] Setting OutFile to fd 1 ...
I1205 19:52:16.514850   35429 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1205 19:52:16.514858   35429 out.go:309] Setting ErrFile to fd 2...
I1205 19:52:16.514864   35429 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1205 19:52:16.515124   35429 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-2478/.minikube/bin
I1205 19:52:16.515767   35429 config.go:182] Loaded profile config "functional-025502": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1205 19:52:16.515886   35429 config.go:182] Loaded profile config "functional-025502": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1205 19:52:16.516400   35429 cli_runner.go:164] Run: docker container inspect functional-025502 --format={{.State.Status}}
I1205 19:52:16.536655   35429 ssh_runner.go:195] Run: systemctl --version
I1205 19:52:16.536705   35429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-025502
I1205 19:52:16.569763   35429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/functional-025502/id_rsa Username:docker}
I1205 19:52:16.677525   35429 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)
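
As the stderr trace shows, "image ls" on a crio node is backed by "sudo crictl images --output json" over SSH. A decoding sketch of that output (illustration only; the field names below are a best-guess subset of the CRI response, not a pinned schema):

// images.go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-025502",
		"ssh", "sudo crictl images --output json").Output()
	if err != nil {
		panic(err)
	}
	var res struct {
		Images []struct {
			RepoTags []string `json:"repoTags"` // assumed field names
			Size     string   `json:"size"`
		} `json:"images"`
	}
	if err := json.Unmarshal(out, &res); err != nil {
		panic(err)
	}
	for _, img := range res.Images {
		fmt.Println(img.RepoTags, img.Size)
	}
}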

TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-025502 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 04b4c447bb9d4 | 121MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 3ca3ca488cf13 | 70MB   |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | 04b4eaa3d3db8 | 60.9MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/kube-scheduler          | v1.28.4            | 05c284c929889 | 59.3MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| gcr.io/google-containers/addon-resizer  | functional-025502  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | 97e04611ad434 | 51.4MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 9cdd6470f48c8 | 182MB  |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/library/nginx                 | alpine             | f09fc93534f6a | 45.3MB |
| docker.io/library/nginx                 | latest             | 5628e5ea3c17f | 196MB  |
| registry.k8s.io/kube-controller-manager | v1.28.4            | 9961cbceaf234 | 117MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-025502 image ls --format table --alsologtostderr:
I1205 19:52:17.135139   35561 out.go:296] Setting OutFile to fd 1 ...
I1205 19:52:17.135347   35561 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1205 19:52:17.135386   35561 out.go:309] Setting ErrFile to fd 2...
I1205 19:52:17.135408   35561 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1205 19:52:17.135675   35561 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-2478/.minikube/bin
I1205 19:52:17.136421   35561 config.go:182] Loaded profile config "functional-025502": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1205 19:52:17.136664   35561 config.go:182] Loaded profile config "functional-025502": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1205 19:52:17.137544   35561 cli_runner.go:164] Run: docker container inspect functional-025502 --format={{.State.Status}}
I1205 19:52:17.163723   35561 ssh_runner.go:195] Run: systemctl --version
I1205 19:52:17.163792   35561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-025502
I1205 19:52:17.196550   35561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/functional-025502/id_rsa Username:docker}
I1205 19:52:17.302028   35561 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-025502 image ls --format json --alsologtostderr:
[{"id":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"59253556"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"f09fc93534f6a80e1cb9ad70fe8c697b1596faa9f1b50895f203bc02feb9ebb8","repoDigests":["docker.io/library/nginx@sha256:18d2bb20c22e511b92a3ec81f553edfcaeeb74fd1c96a92c56a6c4252c75eec7","docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc"],"repoTags":["docker.io/library/nginx:alpine"],"size":"45281593"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51ba
a7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-025502"],"size":"34114467"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"117252916"},{"id":"3ca3
ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":["registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"69992343"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"60867618"},{"id":"5628e5ea3c17fa1cbf49692edf41d5a1cdf198922898e6ffb29c19
768dca8fd3","repoDigests":["docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee","docker.io/library/nginx@sha256:736342e81e97220f954b8c33846ba80d2d95f59b30225a5c63d063c8b250b0ab"],"repoTags":["docker.io/library/nginx:latest"],"size":"196211465"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry
.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5
e0b14f10c3","registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"182203183"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51393451"},{"id":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf
7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb","registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"121119694"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-025502 image ls --format json --alsologtostderr:
I1205 19:52:16.843725   35491 out.go:296] Setting OutFile to fd 1 ...
I1205 19:52:16.843967   35491 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1205 19:52:16.843978   35491 out.go:309] Setting ErrFile to fd 2...
I1205 19:52:16.843985   35491 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1205 19:52:16.844234   35491 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-2478/.minikube/bin
I1205 19:52:16.844875   35491 config.go:182] Loaded profile config "functional-025502": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1205 19:52:16.845096   35491 config.go:182] Loaded profile config "functional-025502": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1205 19:52:16.845596   35491 cli_runner.go:164] Run: docker container inspect functional-025502 --format={{.State.Status}}
I1205 19:52:16.869717   35491 ssh_runner.go:195] Run: systemctl --version
I1205 19:52:16.869775   35491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-025502
I1205 19:52:16.888874   35491 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/functional-025502/id_rsa Username:docker}
I1205 19:52:16.993926   35491 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
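
The JSON form above is a single array of image objects, which makes it the easiest of the four formats to post-process. A minimal sketch, assuming jq is installed on the host (profile name as in this run):

	out/minikube-linux-arm64 -p functional-025502 image ls --format json \
	  | jq -r '.[] | "\(.repoTags[0] // "<none>")\t\(.size)"'

Entries with an empty repoTags list (the dashboard and metrics-scraper images above) fall back to "<none>".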

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-025502 image ls --format yaml --alsologtostderr:
- id: f09fc93534f6a80e1cb9ad70fe8c697b1596faa9f1b50895f203bc02feb9ebb8
repoDigests:
- docker.io/library/nginx@sha256:18d2bb20c22e511b92a3ec81f553edfcaeeb74fd1c96a92c56a6c4252c75eec7
- docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc
repoTags:
- docker.io/library/nginx:alpine
size: "45281593"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-025502
size: "34114467"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
- registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "182203183"
- id: 05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "59253556"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 5628e5ea3c17fa1cbf49692edf41d5a1cdf198922898e6ffb29c19768dca8fd3
repoDigests:
- docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee
- docker.io/library/nginx@sha256:736342e81e97220f954b8c33846ba80d2d95f59b30225a5c63d063c8b250b0ab
repoTags:
- docker.io/library/nginx:latest
size: "196211465"
- id: 04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
- registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "121119694"
- id: 9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "117252916"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "60867618"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51393451"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests:
- registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "69992343"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-025502 image ls --format yaml --alsologtostderr:
I1205 19:52:16.503820   35430 out.go:296] Setting OutFile to fd 1 ...
I1205 19:52:16.504035   35430 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1205 19:52:16.504046   35430 out.go:309] Setting ErrFile to fd 2...
I1205 19:52:16.504052   35430 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1205 19:52:16.504396   35430 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-2478/.minikube/bin
I1205 19:52:16.505045   35430 config.go:182] Loaded profile config "functional-025502": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1205 19:52:16.505216   35430 config.go:182] Loaded profile config "functional-025502": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1205 19:52:16.505733   35430 cli_runner.go:164] Run: docker container inspect functional-025502 --format={{.State.Status}}
I1205 19:52:16.531011   35430 ssh_runner.go:195] Run: systemctl --version
I1205 19:52:16.531063   35430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-025502
I1205 19:52:16.551333   35430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/functional-025502/id_rsa Username:docker}
I1205 19:52:16.653471   35430 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)
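
Taken together, the four ImageList subtests exercise every serialization of the same subcommand against one image store; only the output format differs. The commands, as run above (minus --alsologtostderr):

	out/minikube-linux-arm64 -p functional-025502 image ls --format short
	out/minikube-linux-arm64 -p functional-025502 image ls --format table
	out/minikube-linux-arm64 -p functional-025502 image ls --format json
	out/minikube-linux-arm64 -p functional-025502 image ls --format yaml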

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-025502 ssh pgrep buildkitd: exit status 1 (393.881538ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 image build -t localhost/my-image:functional-025502 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-025502 image build -t localhost/my-image:functional-025502 testdata/build --alsologtostderr: (2.823116323s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-025502 image build -t localhost/my-image:functional-025502 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> f101acd233a
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-025502
--> 4079c624ec4
Successfully tagged localhost/my-image:functional-025502
4079c624ec4f61960ded76bd36e3ffac32f536b84c95ae81a4a5a97753c53e66
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-025502 image build -t localhost/my-image:functional-025502 testdata/build --alsologtostderr:
I1205 19:52:17.213953   35568 out.go:296] Setting OutFile to fd 1 ...
I1205 19:52:17.214182   35568 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1205 19:52:17.215734   35568 out.go:309] Setting ErrFile to fd 2...
I1205 19:52:17.215818   35568 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1205 19:52:17.216209   35568 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-2478/.minikube/bin
I1205 19:52:17.217276   35568 config.go:182] Loaded profile config "functional-025502": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1205 19:52:17.219006   35568 config.go:182] Loaded profile config "functional-025502": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1205 19:52:17.219517   35568 cli_runner.go:164] Run: docker container inspect functional-025502 --format={{.State.Status}}
I1205 19:52:17.247252   35568 ssh_runner.go:195] Run: systemctl --version
I1205 19:52:17.247303   35568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-025502
I1205 19:52:17.269306   35568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/functional-025502/id_rsa Username:docker}
I1205 19:52:17.373067   35568 build_images.go:151] Building image from path: /tmp/build.3893399948.tar
I1205 19:52:17.373198   35568 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1205 19:52:17.382793   35568 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3893399948.tar
I1205 19:52:17.386830   35568 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3893399948.tar: stat -c "%s %y" /var/lib/minikube/build/build.3893399948.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3893399948.tar': No such file or directory
I1205 19:52:17.386854   35568 ssh_runner.go:362] scp /tmp/build.3893399948.tar --> /var/lib/minikube/build/build.3893399948.tar (3072 bytes)
I1205 19:52:17.417472   35568 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3893399948
I1205 19:52:17.427605   35568 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3893399948 -xf /var/lib/minikube/build/build.3893399948.tar
I1205 19:52:17.438092   35568 crio.go:297] Building image: /var/lib/minikube/build/build.3893399948
I1205 19:52:17.438155   35568 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-025502 /var/lib/minikube/build/build.3893399948 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1205 19:52:19.898182   35568 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-025502 /var/lib/minikube/build/build.3893399948 --cgroup-manager=cgroupfs: (2.46000045s)
I1205 19:52:19.898258   35568 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3893399948
I1205 19:52:19.908025   35568 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3893399948.tar
I1205 19:52:19.917485   35568 build_images.go:207] Built localhost/my-image:functional-025502 from /tmp/build.3893399948.tar
I1205 19:52:19.917512   35568 build_images.go:123] succeeded building to: functional-025502
I1205 19:52:19.917517   35568 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.48s)
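
Because buildkitd is not running in the node (the pgrep probe above exits with status 1), minikube falls back to podman build inside the node, as the Stderr log shows. The three STEP lines imply a build context along the following lines; this is a reconstruction from the log, not the actual contents of testdata/build, and the /tmp paths and content.txt payload are made up:

	# Recreate an equivalent context and run the same build (hypothetical paths/content).
	mkdir -p /tmp/build && cd /tmp/build
	printf 'hello' > content.txt
	cat > Dockerfile <<-'EOF'
	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /
	EOF
	out/minikube-linux-arm64 -p functional-025502 image build -t localhost/my-image:functional-025502 /tmp/build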

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
2023/12/05 19:51:58 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.511173566s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-025502
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.54s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.27s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.26s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)
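
All three UpdateContextCmd subtests run the same command and differ only in the kubeconfig state they start from. To reproduce and verify by hand (a sketch; assumes kubectl is on the PATH):

	out/minikube-linux-arm64 -p functional-025502 update-context
	kubectl config current-context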

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 image load --daemon gcr.io/google-containers/addon-resizer:functional-025502 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-025502 image load --daemon gcr.io/google-containers/addon-resizer:functional-025502 --alsologtostderr: (4.609402271s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.86s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 image load --daemon gcr.io/google-containers/addon-resizer:functional-025502 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-025502 image load --daemon gcr.io/google-containers/addon-resizer:functional-025502 --alsologtostderr: (2.591968171s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.84s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.084466252s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-025502
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 image load --daemon gcr.io/google-containers/addon-resizer:functional-025502 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-025502 image load --daemon gcr.io/google-containers/addon-resizer:functional-025502 --alsologtostderr: (3.518082695s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.88s)
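
ImageLoadDaemon, ImageReloadDaemon, and ImageTagAndLoadDaemon all follow the same host-side round trip: pull with the host's docker, retag into the profile's name, then load into the cluster's crio image store. The commands, as run above:

	docker pull gcr.io/google-containers/addon-resizer:1.8.9
	docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-025502
	out/minikube-linux-arm64 -p functional-025502 image load --daemon gcr.io/google-containers/addon-resizer:functional-025502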

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 image save gcr.io/google-containers/addon-resizer:functional-025502 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.95s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 image rm gcr.io/google-containers/addon-resizer:functional-025502 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.22s)
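
ImageSaveToFile, ImageRemove, and ImageLoadFromFile form a save/remove/restore cycle through a tarball; ImageSaveDaemon (next) instead copies the image back into the host docker daemon. Condensed, with an arbitrary tarball path in place of the Jenkins workspace path used above:

	out/minikube-linux-arm64 -p functional-025502 image save gcr.io/google-containers/addon-resizer:functional-025502 /tmp/addon-resizer-save.tar
	out/minikube-linux-arm64 -p functional-025502 image rm gcr.io/google-containers/addon-resizer:functional-025502
	out/minikube-linux-arm64 -p functional-025502 image load /tmp/addon-resizer-save.tar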

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-025502
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-025502 image save --daemon gcr.io/google-containers/addon-resizer:functional-025502 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-025502
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.02s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-025502
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-025502
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-025502
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (99.06s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-867324 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1205 19:53:21.043085    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.crt: no such file or directory
E1205 19:53:21.049143    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.crt: no such file or directory
E1205 19:53:21.059360    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.crt: no such file or directory
E1205 19:53:21.079577    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.crt: no such file or directory
E1205 19:53:21.119804    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.crt: no such file or directory
E1205 19:53:21.200059    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.crt: no such file or directory
E1205 19:53:21.360398    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.crt: no such file or directory
E1205 19:53:21.680900    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.crt: no such file or directory
E1205 19:53:22.321161    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.crt: no such file or directory
E1205 19:53:23.602253    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.crt: no such file or directory
E1205 19:53:26.162474    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.crt: no such file or directory
E1205 19:53:31.283460    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.crt: no such file or directory
E1205 19:53:41.524458    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.crt: no such file or directory
E1205 19:54:02.005382    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-867324 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m39.056031017s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (99.06s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.05s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-867324 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-867324 addons enable ingress --alsologtostderr -v=5: (12.052467713s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.05s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.68s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-867324 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.68s)

                                                
                                    
TestJSONOutput/start/Command (77.5s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-074561 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E1205 19:57:37.441058    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/functional-025502/client.crt: no such file or directory
E1205 19:58:21.040341    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-074561 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m17.49030883s)
--- PASS: TestJSONOutput/start/Command (77.50s)
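
Every line that start --output=json emits is a self-contained CloudEvents-style JSON object (the TestErrorJSONOutput stdout below shows their shape), so the stream can be filtered line by line. A sketch, assuming jq is installed on the host:

	out/minikube-linux-arm64 start -p json-output-074561 --output=json --user=testUser --memory=2200 --wait=true --driver=docker --container-runtime=crio \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'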

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.79s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-074561 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.79s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.72s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-074561 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.72s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.86s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-074561 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-074561 --output=json --user=testUser: (5.861373997s)
--- PASS: TestJSONOutput/stop/Command (5.86s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.28s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-240468 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-240468 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (105.228551ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0a0418c0-066c-4e89-9d25-f92803dcb2ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-240468] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"aa59dff6-0d4f-4f7d-b51d-1ef2a5615c5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17731"}}
	{"specversion":"1.0","id":"4187d0f6-6a84-4fbb-a041-b1595c889298","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"31e9a2e9-d61b-4976-a439-a6f1cba727c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17731-2478/kubeconfig"}}
	{"specversion":"1.0","id":"ebb87f6a-8e12-482a-83b5-6dde98befc6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-2478/.minikube"}}
	{"specversion":"1.0","id":"c2e0914d-c527-4181-beba-a9be2a114fc6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"97aee625-15bb-468d-8fa6-13174016b749","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"10f74450-cff6-42f8-ac48-a2981ce24632","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-240468" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-240468
--- PASS: TestErrorJSONOutput (0.28s)
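
Error events travel on the same stream with type io.k8s.sigs.minikube.error, so the failure above can be extracted the same way (a sketch, assuming jq; with set -o pipefail the pipeline still exits 56):

	out/minikube-linux-arm64 start -p json-output-error-240468 --memory=2200 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
	# prints: The driver 'fail' is not supported on linux/arm64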

                                                
                                    
TestKicCustomNetwork/create_custom_network (47.52s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-732589 --network=
E1205 19:58:48.726632    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.crt: no such file or directory
E1205 19:58:59.364789    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/functional-025502/client.crt: no such file or directory
E1205 19:59:14.991738    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.crt: no such file or directory
E1205 19:59:14.997586    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.crt: no such file or directory
E1205 19:59:15.007840    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.crt: no such file or directory
E1205 19:59:15.028088    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.crt: no such file or directory
E1205 19:59:15.068344    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.crt: no such file or directory
E1205 19:59:15.148611    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.crt: no such file or directory
E1205 19:59:15.308946    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.crt: no such file or directory
E1205 19:59:15.629815    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.crt: no such file or directory
E1205 19:59:16.270630    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.crt: no such file or directory
E1205 19:59:17.551076    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.crt: no such file or directory
E1205 19:59:20.112599    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.crt: no such file or directory
E1205 19:59:25.232948    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-732589 --network=: (45.437335285s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-732589" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-732589
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-732589: (2.060201015s)
--- PASS: TestKicCustomNetwork/create_custom_network (47.52s)
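
The flow above can be reproduced manually; a minimal sketch (profile and network names are arbitrary; the empty --network= used by the test lets minikube name the network itself):

	out/minikube-linux-arm64 start -p net-demo --network=custom-net
	docker network ls --format {{.Name}}           # custom-net should be listed
	out/minikube-linux-arm64 delete -p net-demo    # should also remove the network it created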

TestKicCustomNetwork/use_default_bridge_network (37.04s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-871906 --network=bridge
E1205 19:59:35.473703    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.crt: no such file or directory
E1205 19:59:55.954635    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-871906 --network=bridge: (35.025087799s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-871906" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-871906
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-871906: (1.99786966s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (37.04s)

TestKicExistingNetwork (37.03s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-055946 --network=existing-network
E1205 20:00:36.915876    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-055946 --network=existing-network: (34.838596187s)
helpers_test.go:175: Cleaning up "existing-network-055946" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-055946
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-055946: (1.996088548s)
--- PASS: TestKicExistingNetwork (37.03s)
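
Unlike the two cases above, here the docker network exists before minikube starts, so minikube attaches to it instead of creating one. A hand-run equivalent (names arbitrary):

	docker network create existing-network
	out/minikube-linux-arm64 start -p exist-demo --network=existing-network
	out/minikube-linux-arm64 delete -p exist-demo
	docker network rm existing-network    # the pre-existing network is ours to clean up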

TestKicCustomSubnet (33.81s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-204552 --subnet=192.168.60.0/24
E1205 20:01:15.519991    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/functional-025502/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-204552 --subnet=192.168.60.0/24: (31.700197386s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-204552 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-204552" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-204552
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-204552: (2.08075136s)
--- PASS: TestKicCustomSubnet (33.81s)
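
The two commands exercised above form a self-contained recipe for pinning a KIC cluster to a subnet (profile name arbitrary):

	out/minikube-linux-arm64 start -p subnet-demo --subnet=192.168.60.0/24
	docker network inspect subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"    # expect 192.168.60.0/24
	out/minikube-linux-arm64 delete -p subnet-demo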

TestKicStaticIP (33.99s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-267961 --static-ip=192.168.200.200
E1205 20:01:43.204975    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/functional-025502/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-267961 --static-ip=192.168.200.200: (31.781539422s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-267961 ip
helpers_test.go:175: Cleaning up "static-ip-267961" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-267961
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-267961: (2.029772326s)
--- PASS: TestKicStaticIP (33.99s)
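
Likewise for a static node IP (profile name arbitrary):

	out/minikube-linux-arm64 start -p static-demo --static-ip=192.168.200.200
	out/minikube-linux-arm64 -p static-demo ip    # expect 192.168.200.200
	out/minikube-linux-arm64 delete -p static-demo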

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (68.66s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-915778 --driver=docker  --container-runtime=crio
E1205 20:01:58.836084    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-915778 --driver=docker  --container-runtime=crio: (30.502321519s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-918543 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-918543 --driver=docker  --container-runtime=crio: (32.374097401s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-915778
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-918543
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-918543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-918543
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-918543: (2.125511839s)
helpers_test.go:175: Cleaning up "first-915778" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-915778
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-915778: (2.34851627s)
--- PASS: TestMinikubeProfile (68.66s)
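
The profile-switching sequence above, condensed into a runnable sketch (profile names arbitrary):

	out/minikube-linux-arm64 start -p first --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 start -p second --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 profile first          # make "first" the active profile
	out/minikube-linux-arm64 profile list -ojson    # inspect all profiles as JSON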

TestMountStart/serial/StartWithMountFirst (9.58s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-342253 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-342253 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.584024932s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.58s)
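
A hand-run version of the mount setup that the following steps verify (profile name arbitrary; flag values copied from the test):

	out/minikube-linux-arm64 start -p mount-demo --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 -p mount-demo ssh -- ls /minikube-host    # the host mount shows up here in the guest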

TestMountStart/serial/VerifyMountFirst (0.3s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-342253 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)

TestMountStart/serial/StartWithMountSecond (6.95s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-344042 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-344042 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.946186377s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.95s)

TestMountStart/serial/VerifyMountSecond (0.3s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-344042 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

TestMountStart/serial/DeleteFirst (1.65s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-342253 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-342253 --alsologtostderr -v=5: (1.649707204s)
--- PASS: TestMountStart/serial/DeleteFirst (1.65s)

TestMountStart/serial/VerifyMountPostDelete (0.3s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-344042 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.30s)

TestMountStart/serial/Stop (1.22s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-344042
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-344042: (1.217345253s)
--- PASS: TestMountStart/serial/Stop (1.22s)

TestMountStart/serial/RestartStopped (7.9s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-344042
E1205 20:03:21.040319    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-344042: (6.895007883s)
--- PASS: TestMountStart/serial/RestartStopped (7.90s)

TestMountStart/serial/VerifyMountPostStop (0.3s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-344042 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

TestMultiNode/serial/FreshStart2Nodes (123.97s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-930892 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1205 20:04:14.991853    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.crt: no such file or directory
E1205 20:04:42.676943    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-arm64 start -p multinode-930892 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (2m3.405785592s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (123.97s)
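
The two-node bring-up above reduces to (profile name arbitrary):

	out/minikube-linux-arm64 start -p multi-demo --wait=true --memory=2200 --nodes=2 --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 -p multi-demo status    # both nodes should report Running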

TestMultiNode/serial/DeployApp2Nodes (6.74s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-930892 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-930892 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-930892 -- rollout status deployment/busybox: (4.55908703s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-930892 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-930892 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-930892 -- exec busybox-5bc68d56bd-ctbfn -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-930892 -- exec busybox-5bc68d56bd-gg5q2 -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-930892 -- exec busybox-5bc68d56bd-ctbfn -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-930892 -- exec busybox-5bc68d56bd-gg5q2 -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-930892 -- exec busybox-5bc68d56bd-ctbfn -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-930892 -- exec busybox-5bc68d56bd-gg5q2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.74s)
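
The deployment and DNS checks above can be replayed against any multi-node profile; a sketch (profile name arbitrary; substitute a real pod name from get pods):

	out/minikube-linux-arm64 kubectl -p multi-demo -- apply -f testdata/multinodes/multinode-pod-dns-test.yaml
	out/minikube-linux-arm64 kubectl -p multi-demo -- rollout status deployment/busybox
	out/minikube-linux-arm64 kubectl -p multi-demo -- get pods -o jsonpath='{.items[*].metadata.name}'
	out/minikube-linux-arm64 kubectl -p multi-demo -- exec <busybox-pod> -- nslookup kubernetes.default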

TestMultiNode/serial/AddNode (48.37s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-930892 -v 3 --alsologtostderr
E1205 20:06:15.520040    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/functional-025502/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-930892 -v 3 --alsologtostderr: (47.649308503s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (48.37s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-930892 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.36s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.36s)

TestMultiNode/serial/CopyFile (10.98s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 cp testdata/cp-test.txt multinode-930892:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 ssh -n multinode-930892 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 cp multinode-930892:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile220806999/001/cp-test_multinode-930892.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 ssh -n multinode-930892 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 cp multinode-930892:/home/docker/cp-test.txt multinode-930892-m02:/home/docker/cp-test_multinode-930892_multinode-930892-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 ssh -n multinode-930892 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 ssh -n multinode-930892-m02 "sudo cat /home/docker/cp-test_multinode-930892_multinode-930892-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 cp multinode-930892:/home/docker/cp-test.txt multinode-930892-m03:/home/docker/cp-test_multinode-930892_multinode-930892-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 ssh -n multinode-930892 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 ssh -n multinode-930892-m03 "sudo cat /home/docker/cp-test_multinode-930892_multinode-930892-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 cp testdata/cp-test.txt multinode-930892-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 ssh -n multinode-930892-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 cp multinode-930892-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile220806999/001/cp-test_multinode-930892-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 ssh -n multinode-930892-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 cp multinode-930892-m02:/home/docker/cp-test.txt multinode-930892:/home/docker/cp-test_multinode-930892-m02_multinode-930892.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 ssh -n multinode-930892-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 ssh -n multinode-930892 "sudo cat /home/docker/cp-test_multinode-930892-m02_multinode-930892.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 cp multinode-930892-m02:/home/docker/cp-test.txt multinode-930892-m03:/home/docker/cp-test_multinode-930892-m02_multinode-930892-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 ssh -n multinode-930892-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 ssh -n multinode-930892-m03 "sudo cat /home/docker/cp-test_multinode-930892-m02_multinode-930892-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 cp testdata/cp-test.txt multinode-930892-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 ssh -n multinode-930892-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 cp multinode-930892-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile220806999/001/cp-test_multinode-930892-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 ssh -n multinode-930892-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 cp multinode-930892-m03:/home/docker/cp-test.txt multinode-930892:/home/docker/cp-test_multinode-930892-m03_multinode-930892.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 ssh -n multinode-930892-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 ssh -n multinode-930892 "sudo cat /home/docker/cp-test_multinode-930892-m03_multinode-930892.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 cp multinode-930892-m03:/home/docker/cp-test.txt multinode-930892-m02:/home/docker/cp-test_multinode-930892-m03_multinode-930892-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 ssh -n multinode-930892-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 ssh -n multinode-930892-m02 "sudo cat /home/docker/cp-test_multinode-930892-m03_multinode-930892-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.98s)
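
The copy matrix above boils down to three primitives; a sketch (names arbitrary):

	out/minikube-linux-arm64 -p multi-demo cp testdata/cp-test.txt multi-demo:/home/docker/cp-test.txt                     # host -> node
	out/minikube-linux-arm64 -p multi-demo cp multi-demo:/home/docker/cp-test.txt /tmp/cp-test.txt                        # node -> host
	out/minikube-linux-arm64 -p multi-demo cp multi-demo:/home/docker/cp-test.txt multi-demo-m02:/home/docker/cp-test.txt # node -> node
	out/minikube-linux-arm64 -p multi-demo ssh -n multi-demo-m02 "sudo cat /home/docker/cp-test.txt"                      # verify on the target node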

TestMultiNode/serial/StopNode (2.35s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-arm64 -p multinode-930892 node stop m03: (1.253344511s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-930892 status: exit status 7 (552.179877ms)
-- stdout --
	multinode-930892
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-930892-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-930892-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-930892 status --alsologtostderr: exit status 7 (547.866689ms)
-- stdout --
	multinode-930892
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-930892-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-930892-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1205 20:06:47.581546   82538 out.go:296] Setting OutFile to fd 1 ...
	I1205 20:06:47.581686   82538 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:06:47.581697   82538 out.go:309] Setting ErrFile to fd 2...
	I1205 20:06:47.581703   82538 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:06:47.582059   82538 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-2478/.minikube/bin
	I1205 20:06:47.582240   82538 out.go:303] Setting JSON to false
	I1205 20:06:47.582288   82538 mustload.go:65] Loading cluster: multinode-930892
	I1205 20:06:47.582412   82538 notify.go:220] Checking for updates...
	I1205 20:06:47.582702   82538 config.go:182] Loaded profile config "multinode-930892": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:06:47.582719   82538 status.go:255] checking status of multinode-930892 ...
	I1205 20:06:47.583371   82538 cli_runner.go:164] Run: docker container inspect multinode-930892 --format={{.State.Status}}
	I1205 20:06:47.602653   82538 status.go:330] multinode-930892 host status = "Running" (err=<nil>)
	I1205 20:06:47.602697   82538 host.go:66] Checking if "multinode-930892" exists ...
	I1205 20:06:47.602979   82538 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-930892
	I1205 20:06:47.620901   82538 host.go:66] Checking if "multinode-930892" exists ...
	I1205 20:06:47.621216   82538 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 20:06:47.621259   82538 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-930892
	I1205 20:06:47.650921   82538 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/multinode-930892/id_rsa Username:docker}
	I1205 20:06:47.749920   82538 ssh_runner.go:195] Run: systemctl --version
	I1205 20:06:47.754942   82538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:06:47.767343   82538 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 20:06:47.832362   82538 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-12-05 20:06:47.822844933 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1205 20:06:47.832972   82538 kubeconfig.go:92] found "multinode-930892" server: "https://192.168.58.2:8443"
	I1205 20:06:47.832994   82538 api_server.go:166] Checking apiserver status ...
	I1205 20:06:47.833035   82538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:06:47.844904   82538 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1283/cgroup
	I1205 20:06:47.854877   82538 api_server.go:182] apiserver freezer: "12:freezer:/docker/d5e6ffca9b1cdea5ea6e00c49ce2f376d8a49697a136a6f3830a6acb7f8f8841/crio/crio-dc2686f4ac2e0467ecb372b012a95c933bd2b39d70cb2eaae196e5cd79e4c0d5"
	I1205 20:06:47.854937   82538 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d5e6ffca9b1cdea5ea6e00c49ce2f376d8a49697a136a6f3830a6acb7f8f8841/crio/crio-dc2686f4ac2e0467ecb372b012a95c933bd2b39d70cb2eaae196e5cd79e4c0d5/freezer.state
	I1205 20:06:47.864386   82538 api_server.go:204] freezer state: "THAWED"
	I1205 20:06:47.864420   82538 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1205 20:06:47.872934   82538 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1205 20:06:47.872959   82538 status.go:421] multinode-930892 apiserver status = Running (err=<nil>)
	I1205 20:06:47.872969   82538 status.go:257] multinode-930892 status: &{Name:multinode-930892 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 20:06:47.872989   82538 status.go:255] checking status of multinode-930892-m02 ...
	I1205 20:06:47.873276   82538 cli_runner.go:164] Run: docker container inspect multinode-930892-m02 --format={{.State.Status}}
	I1205 20:06:47.890551   82538 status.go:330] multinode-930892-m02 host status = "Running" (err=<nil>)
	I1205 20:06:47.890572   82538 host.go:66] Checking if "multinode-930892-m02" exists ...
	I1205 20:06:47.890851   82538 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-930892-m02
	I1205 20:06:47.907396   82538 host.go:66] Checking if "multinode-930892-m02" exists ...
	I1205 20:06:47.907684   82538 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 20:06:47.907728   82538 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-930892-m02
	I1205 20:06:47.924672   82538 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17731-2478/.minikube/machines/multinode-930892-m02/id_rsa Username:docker}
	I1205 20:06:48.026321   82538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:06:48.040246   82538 status.go:257] multinode-930892-m02 status: &{Name:multinode-930892-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1205 20:06:48.040283   82538 status.go:255] checking status of multinode-930892-m03 ...
	I1205 20:06:48.040587   82538 cli_runner.go:164] Run: docker container inspect multinode-930892-m03 --format={{.State.Status}}
	I1205 20:06:48.058127   82538 status.go:330] multinode-930892-m03 host status = "Stopped" (err=<nil>)
	I1205 20:06:48.058145   82538 status.go:343] host is not running, skipping remaining checks
	I1205 20:06:48.058153   82538 status.go:257] multinode-930892-m03 status: &{Name:multinode-930892-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.35s)
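
Note that status deliberately exits non-zero while any node is down, which is what the exit status 7 above reflects; a sketch:

	out/minikube-linux-arm64 -p multi-demo node stop m03
	out/minikube-linux-arm64 -p multi-demo status    # exit status 7 while m03 is stopped
	out/minikube-linux-arm64 -p multi-demo node start m03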

TestMultiNode/serial/StartAfterStop (12.55s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-930892 node start m03 --alsologtostderr: (11.695665608s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.55s)

TestMultiNode/serial/RestartKeepsNodes (123.83s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-930892
multinode_test.go:318: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-930892
multinode_test.go:318: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-930892: (24.999290933s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-930892 --wait=true -v=8 --alsologtostderr
E1205 20:08:21.040440    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-arm64 start -p multinode-930892 --wait=true -v=8 --alsologtostderr: (1m38.674158661s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-930892
--- PASS: TestMultiNode/serial/RestartKeepsNodes (123.83s)

TestMultiNode/serial/DeleteNode (5.31s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p multinode-930892 node delete m03: (4.562756056s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.31s)
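
Removing a node and confirming the cluster shrank, in short form (names arbitrary):

	out/minikube-linux-arm64 -p multi-demo node delete m03
	kubectl get nodes    # m03 should be gone
	docker volume ls     # the deleted node's volume should no longer be listed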

TestMultiNode/serial/StopMultiNode (24.01s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 stop
E1205 20:09:14.991799    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.crt: no such file or directory
multinode_test.go:342: (dbg) Done: out/minikube-linux-arm64 -p multinode-930892 stop: (23.793389718s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-930892 status: exit status 7 (108.878319ms)
-- stdout --
	multinode-930892
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-930892-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-930892 status --alsologtostderr: exit status 7 (109.759533ms)
-- stdout --
	multinode-930892
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-930892-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1205 20:09:33.718865   90869 out.go:296] Setting OutFile to fd 1 ...
	I1205 20:09:33.719022   90869 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:09:33.719049   90869 out.go:309] Setting ErrFile to fd 2...
	I1205 20:09:33.719070   90869 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:09:33.719331   90869 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-2478/.minikube/bin
	I1205 20:09:33.719520   90869 out.go:303] Setting JSON to false
	I1205 20:09:33.719577   90869 mustload.go:65] Loading cluster: multinode-930892
	I1205 20:09:33.719680   90869 notify.go:220] Checking for updates...
	I1205 20:09:33.720020   90869 config.go:182] Loaded profile config "multinode-930892": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:09:33.720033   90869 status.go:255] checking status of multinode-930892 ...
	I1205 20:09:33.720635   90869 cli_runner.go:164] Run: docker container inspect multinode-930892 --format={{.State.Status}}
	I1205 20:09:33.741280   90869 status.go:330] multinode-930892 host status = "Stopped" (err=<nil>)
	I1205 20:09:33.741312   90869 status.go:343] host is not running, skipping remaining checks
	I1205 20:09:33.741319   90869 status.go:257] multinode-930892 status: &{Name:multinode-930892 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 20:09:33.741346   90869 status.go:255] checking status of multinode-930892-m02 ...
	I1205 20:09:33.741664   90869 cli_runner.go:164] Run: docker container inspect multinode-930892-m02 --format={{.State.Status}}
	I1205 20:09:33.762036   90869 status.go:330] multinode-930892-m02 host status = "Stopped" (err=<nil>)
	I1205 20:09:33.762055   90869 status.go:343] host is not running, skipping remaining checks
	I1205 20:09:33.762062   90869 status.go:257] multinode-930892-m02 status: &{Name:multinode-930892-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.01s)

TestMultiNode/serial/RestartMultiNode (81.8s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-930892 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1205 20:09:44.087409    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-arm64 start -p multinode-930892 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m21.034135757s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p multinode-930892 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (81.80s)

TestMultiNode/serial/ValidateNameConflict (32.39s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-930892
multinode_test.go:480: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-930892-m02 --driver=docker  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-930892-m02 --driver=docker  --container-runtime=crio: exit status 14 (101.622189ms)
-- stdout --
	* [multinode-930892-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17731
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17731-2478/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-2478/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-930892-m02' is duplicated with machine name 'multinode-930892-m02' in profile 'multinode-930892'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-930892-m03 --driver=docker  --container-runtime=crio
E1205 20:11:15.519939    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/functional-025502/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-linux-arm64 start -p multinode-930892-m03 --driver=docker  --container-runtime=crio: (29.801240643s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-930892
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-930892: exit status 80 (348.101169ms)
-- stdout --
	* Adding node m03 to cluster multinode-930892
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-930892-m03 already exists in multinode-930892-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-930892-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-930892-m03: (2.067375831s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (32.39s)
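
The exit codes above are worth remembering when naming profiles next to an existing multi-node cluster; a sketch assuming a cluster named multi-demo whose second machine is multi-demo-m02:

	out/minikube-linux-arm64 start -p multi-demo-m02 --driver=docker --container-runtime=crio    # exit 14: collides with an existing machine name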

TestPreload (173.36s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-994359 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1205 20:12:38.565416    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/functional-025502/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-994359 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m22.393217198s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-994359 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-994359 image pull gcr.io/k8s-minikube/busybox: (1.908378958s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-994359
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-994359: (5.80188313s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-994359 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1205 20:13:21.040204    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.crt: no such file or directory
E1205 20:14:14.992374    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-994359 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m20.652641016s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-994359 image list
helpers_test.go:175: Cleaning up "test-preload-994359" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-994359
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-994359: (2.337920476s)
--- PASS: TestPreload (173.36s)
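
The preload scenario above as a standalone recipe (profile name arbitrary; versions copied from the test):

	out/minikube-linux-arm64 start -p preload-demo --memory=2200 --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.24.4
	out/minikube-linux-arm64 -p preload-demo image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-arm64 stop -p preload-demo
	out/minikube-linux-arm64 start -p preload-demo --memory=2200 --wait=true --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 -p preload-demo image list    # busybox should still be listed after the restart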

TestScheduledStopUnix (107.74s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-494299 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-494299 --memory=2048 --driver=docker  --container-runtime=crio: (30.539234871s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-494299 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-494299 -n scheduled-stop-494299
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-494299 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-494299 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-494299 -n scheduled-stop-494299
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-494299
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-494299 --schedule 15s
E1205 20:15:38.037249    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-494299
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-494299: exit status 7 (79.642708ms)
-- stdout --
	scheduled-stop-494299
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-494299 -n scheduled-stop-494299
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-494299 -n scheduled-stop-494299: exit status 7 (87.085894ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-494299" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-494299
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-494299: (5.460078855s)
--- PASS: TestScheduledStopUnix (107.74s)
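
The scheduled-stop controls exercised above, gathered in one place (profile name arbitrary):

	out/minikube-linux-arm64 stop -p sched-demo --schedule 5m        # arm a stop five minutes out
	out/minikube-linux-arm64 stop -p sched-demo --cancel-scheduled   # disarm it
	out/minikube-linux-arm64 stop -p sched-demo --schedule 15s       # arm again and let it fire
	out/minikube-linux-arm64 status -p sched-demo                    # exit status 7 once the host is Stopped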

TestInsufficientStorage (11.38s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-352012 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
E1205 20:16:15.519564    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/functional-025502/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-352012 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.790933827s)
-- stdout --
	{"specversion":"1.0","id":"7d095e6c-ee52-4035-bde4-188ddc8b4fc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-352012] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"58b6526f-4117-4625-84a3-657e6641c557","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17731"}}
	{"specversion":"1.0","id":"c4ee3f4c-f8fd-4959-bd3a-30191222b279","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7b88d6c3-0163-41bf-a94a-1547e4718659","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17731-2478/kubeconfig"}}
	{"specversion":"1.0","id":"bb822771-6f9f-4b28-89cf-7ebc44440eb6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-2478/.minikube"}}
	{"specversion":"1.0","id":"ddf9aba7-c03f-402f-92be-b61855f7f49e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"c3f91631-9fe9-49c5-b99f-107ac1d986ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0933b2fd-4878-47f1-9d35-15b9c7b48e59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c6e35362-4ed3-49cd-b465-2ca273aa55d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"b89f41e0-f07b-45e7-99d1-c245839efd75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e4eee96e-f136-4903-9dd7-bfbddfce0cec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"b320dcc3-6a80-46e4-80c9-8ff14ea27f68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-352012 in cluster insufficient-storage-352012","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"70702f5c-f6a0-425c-a4a4-3d860df05e14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"af8cf3db-72d2-41d2-aab1-7991d952dc74","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"b0d83662-c8d2-4e67-8042-457614081713","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
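The stdout above is minikube's structured, CloudEvents-style progress stream (the test invokes start with --output=json). A minimal Go sketch for consuming it, covering only the fields visible in this log; the struct and program are illustrative, not part of the test suite:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// minikubeEvent models one JSON line of the stream above; every data value
	// in this log is a string, so a string map is enough for a sketch.
	type minikubeEvent struct {
		SpecVersion string            `json:"specversion"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev minikubeEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip any non-JSON lines
			}
			// "io.k8s.sigs.minikube.error" events carry advice and an exit code,
			// e.g. RSRC_DOCKER_STORAGE with exitcode 26 in the run above.
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error %s (exit %s): %s\n",
					ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}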
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-352012 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-352012 --output=json --layout=cluster: exit status 7 (346.990071ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-352012","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-352012","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 20:16:24.609382  107490 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-352012" does not appear in /home/jenkins/minikube-integration/17731-2478/kubeconfig

                                                
                                                
** /stderr **
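status_test.go then parses the cluster-layout JSON shown above. A minimal decoding sketch with field names taken directly from this output; the types are hypothetical stand-ins, not minikube's own:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type component struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	}

	type clusterStatus struct {
		Name         string               `json:"Name"`
		StatusCode   int                  `json:"StatusCode"`
		StatusName   string               `json:"StatusName"`
		StatusDetail string               `json:"StatusDetail"`
		Components   map[string]component `json:"Components"`
		Nodes        []struct {
			Name       string               `json:"Name"`
			StatusCode int                  `json:"StatusCode"`
			Components map[string]component `json:"Components"`
		} `json:"Nodes"`
	}

	func main() {
		// Abbreviated sample from the stdout above.
		raw := `{"Name":"insufficient-storage-352012","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space"}`
		var st clusterStatus
		if err := json.Unmarshal([]byte(raw), &st); err != nil {
			panic(err)
		}
		// 507 mirrors HTTP "Insufficient Storage", which is why the status
		// command exits non-zero above.
		fmt.Println(st.StatusName, "-", st.StatusDetail)
	}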
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-352012 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-352012 --output=json --layout=cluster: exit status 7 (324.240652ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-352012","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-352012","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 20:16:24.935256  107544 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-352012" does not appear in /home/jenkins/minikube-integration/17731-2478/kubeconfig
	E1205 20:16:24.946510  107544 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/insufficient-storage-352012/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-352012" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-352012
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-352012: (1.921602136s)
--- PASS: TestInsufficientStorage (11.38s)

                                                
                                    
TestKubernetesUpgrade (392.11s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-553257 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1205 20:18:21.045941    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-553257 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m9.584499543s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-553257
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-553257: (1.346456382s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-553257 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-553257 status --format={{.Host}}: exit status 7 (87.093411ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-553257 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-553257 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m46.839061492s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-553257 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-553257 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-553257 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (91.130069ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-553257] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17731
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17731-2478/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-2478/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-553257
	    minikube start -p kubernetes-upgrade-553257 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5532572 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.1, by running:
	    
	    minikube start -p kubernetes-upgrade-553257 --kubernetes-version=v1.29.0-rc.1
	    

                                                
                                                
** /stderr **
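The downgrade attempt is expected to fail, and the check keys off the process exit code (106, which the stderr above maps to K8S_DOWNGRADE_UNSUPPORTED). A sketch of that exit-code capture using only os/exec; the real assertion helper in the test framework differs:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "start",
			"-p", "kubernetes-upgrade-553257", "--memory=2200",
			"--kubernetes-version=v1.16.0",
			"--driver=docker", "--container-runtime=crio")
		out, err := cmd.CombinedOutput()
		if exitErr, ok := err.(*exec.ExitError); ok {
			// Expected here: exit 106 plus the K8S_DOWNGRADE_UNSUPPORTED advice.
			fmt.Printf("exit %d\n%s", exitErr.ExitCode(), out)
		}
	}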
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-553257 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-553257 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (31.354799608s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-553257" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-553257
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-553257: (2.709776623s)
--- PASS: TestKubernetesUpgrade (392.11s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-249031 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-249031 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (101.7903ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-249031] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17731
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17731-2478/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-2478/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (39.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-249031 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-249031 --driver=docker  --container-runtime=crio: (38.435790317s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-249031 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (16.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-249031 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-249031 --no-kubernetes --driver=docker  --container-runtime=crio: (13.413009524s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-249031 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-249031 status -o json: exit status 2 (470.726338ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-249031","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-249031
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-249031: (2.549562882s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.43s)

                                                
                                    
TestNoKubernetes/serial/Start (7.65s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-249031 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-249031 --no-kubernetes --driver=docker  --container-runtime=crio: (7.647237741s)
--- PASS: TestNoKubernetes/serial/Start (7.65s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-249031 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-249031 "sudo systemctl is-active --quiet service kubelet": exit status 1 (398.890372ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
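The non-zero exit is the point of this check: systemctl is-active returns 0 only for an active unit (3 means inactive), so the ssh command failing here confirms kubelet is not running. A sketch of the same probe; kubeletRunning is a hypothetical helper name:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// kubeletRunning reports whether the kubelet unit is active inside the
	// minikube node, mirroring the ssh command in the log above.
	func kubeletRunning(profile string) bool {
		cmd := exec.Command("out/minikube-linux-arm64", "ssh", "-p", profile,
			"sudo systemctl is-active --quiet service kubelet")
		return cmd.Run() == nil // nil only when systemctl exited 0 (active)
	}

	func main() {
		fmt.Println("kubelet running:", kubeletRunning("NoKubernetes-249031"))
	}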
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.40s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.05s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.05s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-249031
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-249031: (1.29072268s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-249031 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-249031 --driver=docker  --container-runtime=crio: (7.348698119s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.35s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-249031 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-249031 "sudo systemctl is-active --quiet service kubelet": exit status 1 (293.711463ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.18s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.18s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-180271
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)

                                                
                                    
TestPause/serial/Start (56.07s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-503740 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-503740 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (56.074749818s)
--- PASS: TestPause/serial/Start (56.07s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (29.5s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-503740 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1205 20:23:21.040029    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-503740 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.465281465s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (29.50s)

                                                
                                    
TestPause/serial/Pause (1.1s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-503740 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-503740 --alsologtostderr -v=5: (1.097361677s)
--- PASS: TestPause/serial/Pause (1.10s)

                                                
                                    
TestPause/serial/VerifyStatus (0.46s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-503740 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-503740 --output=json --layout=cluster: exit status 2 (459.805458ms)

                                                
                                                
-- stdout --
	{"Name":"pause-503740","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-503740","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
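The StatusCode values in this JSON follow HTTP semantics: 418 ("I'm a teapot") marks the paused cluster and apiserver, while 200, 405, 500, and 507 appear elsewhere in this report. An informal summary as Go constants; the names are mine, not minikube's:

	package main

	import "fmt"

	const (
		statusOK                  = 200 // healthy component or node
		statusStopped             = 405 // apiserver/kubelet stopped
		statusPaused              = 418 // paused cluster
		statusError               = 500 // e.g. kubeconfig endpoint missing
		statusInsufficientStorage = 507 // /var (almost) out of disk space
	)

	func main() {
		fmt.Println("paused:", statusPaused)
	}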
--- PASS: TestPause/serial/VerifyStatus (0.46s)

                                                
                                    
TestPause/serial/Unpause (0.73s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-503740 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.73s)

                                                
                                    
TestPause/serial/PauseAgain (1.37s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-503740 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-503740 --alsologtostderr -v=5: (1.369548827s)
--- PASS: TestPause/serial/PauseAgain (1.37s)

                                                
                                    
TestPause/serial/DeletePaused (3.31s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-503740 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-503740 --alsologtostderr -v=5: (3.312206134s)
--- PASS: TestPause/serial/DeletePaused (3.31s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.48s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-503740
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-503740: exit status 1 (30.279852ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-503740: no such volume

                                                
                                                
** /stderr **
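The cleanup verification relies on docker volume inspect failing with "no such volume" once the profile's volume is gone. A small sketch of that assertion, illustrative only:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "volume", "inspect", "pause-503740").CombinedOutput()
		// Deletion succeeded if the daemon no longer knows the volume.
		gone := err != nil && strings.Contains(string(out), "no such volume")
		fmt.Println("volume deleted:", gone)
	}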
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.48s)

                                                
                                    
TestNetworkPlugins/group/false (5.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-900155 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-900155 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (205.431255ms)

                                                
                                                
-- stdout --
	* [false-900155] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17731
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17731-2478/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-2478/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 20:24:19.993442  146488 out.go:296] Setting OutFile to fd 1 ...
	I1205 20:24:19.993598  146488 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:24:19.993608  146488 out.go:309] Setting ErrFile to fd 2...
	I1205 20:24:19.993614  146488 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:24:19.993844  146488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-2478/.minikube/bin
	I1205 20:24:19.994221  146488 out.go:303] Setting JSON to false
	I1205 20:24:19.995167  146488 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4006,"bootTime":1701803854,"procs":269,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1205 20:24:19.995234  146488 start.go:138] virtualization:  
	I1205 20:24:19.998486  146488 out.go:177] * [false-900155] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1205 20:24:20.001361  146488 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 20:24:20.003320  146488 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:24:20.001469  146488 notify.go:220] Checking for updates...
	I1205 20:24:20.005601  146488 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-2478/kubeconfig
	I1205 20:24:20.007874  146488 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-2478/.minikube
	I1205 20:24:20.009958  146488 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 20:24:20.011646  146488 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:24:20.014016  146488 config.go:182] Loaded profile config "force-systemd-flag-415679": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:24:20.014123  146488 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 20:24:20.038380  146488 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1205 20:24:20.038499  146488 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 20:24:20.123603  146488 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:45 SystemTime:2023-12-05 20:24:20.113795736 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1205 20:24:20.123712  146488 docker.go:295] overlay module found
	I1205 20:24:20.127175  146488 out.go:177] * Using the docker driver based on user configuration
	I1205 20:24:20.128941  146488 start.go:298] selected driver: docker
	I1205 20:24:20.128964  146488 start.go:902] validating driver "docker" against <nil>
	I1205 20:24:20.128978  146488 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:24:20.131404  146488 out.go:177] 
	W1205 20:24:20.133451  146488 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1205 20:24:20.135634  146488 out.go:177] 

                                                
                                                
** /stderr **
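The exit status 14 above comes from driver validation: CRI-O ships no built-in pod networking, so --cni=false is rejected before any node is created. An illustrative restatement of that rule; it mirrors the MK_USAGE message, not minikube's actual code:

	package main

	import "fmt"

	// validateCNI is a hypothetical stand-in for the check that fails above.
	func validateCNI(runtime, cni string) error {
		if runtime == "crio" && cni == "false" {
			return fmt.Errorf("The %q container runtime requires CNI", runtime)
		}
		return nil
	}

	func main() {
		fmt.Println(validateCNI("crio", "false")) // The "crio" container runtime requires CNI
	}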
net_test.go:88: 
----------------------- debugLogs start: false-900155 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-900155

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-900155

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-900155

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-900155

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-900155

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-900155

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-900155

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-900155

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-900155

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-900155

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900155"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900155"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900155"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-900155

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900155"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900155"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-900155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-900155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-900155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-900155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-900155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-900155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-900155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-900155" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900155"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900155"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900155"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900155"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900155"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-900155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-900155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-900155" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900155"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900155"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900155"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900155"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900155"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-900155

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900155"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900155"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900155"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900155"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900155"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900155"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900155"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900155"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900155"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900155"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900155"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900155"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900155"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900155"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900155"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900155"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900155"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900155"

                                                
                                                
----------------------- debugLogs end: false-900155 [took: 4.952451626s] --------------------------------
helpers_test.go:175: Cleaning up "false-900155" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-900155
--- PASS: TestNetworkPlugins/group/false (5.44s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (125.68s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-683987 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E1205 20:26:15.519960    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/functional-025502/client.crt: no such file or directory
E1205 20:26:24.088538    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-683987 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m5.676819389s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (125.68s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (11.54s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-683987 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [220a3b58-32ae-4d7a-90aa-590e8da8e454] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [220a3b58-32ae-4d7a-90aa-590e8da8e454] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.02607826s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-683987 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.54s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-683987 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-683987 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-683987 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-683987 --alsologtostderr -v=3: (12.127988105s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-683987 -n old-k8s-version-683987
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-683987 -n old-k8s-version-683987: exit status 7 (130.941628ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-683987 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (444.59s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-683987 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E1205 20:28:21.039884    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-683987 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m24.086015985s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-683987 -n old-k8s-version-683987
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (444.59s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.73s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-942793 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E1205 20:29:14.991575    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.crt: no such file or directory
E1205 20:29:18.566090    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/functional-025502/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-942793 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m26.732199949s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.73s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-942793 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b30493aa-fe3b-4a1e-9c7c-1e7c859bab0d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b30493aa-fe3b-4a1e-9c7c-1e7c859bab0d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.025753125s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-942793 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.45s)
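Note: the deploy check creates a busybox pod from the repo's testdata, waits for it to leave Pending, then reads the container's open-file limit. The harness polls with its own helper; as a stand-in, kubectl wait does the same job (context name from this run, selector taken from the pod's label):

	kubectl --context default-k8s-diff-port-942793 create -f testdata/busybox.yaml
	kubectl --context default-k8s-diff-port-942793 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m0s
	kubectl --context default-k8s-diff-port-942793 exec busybox -- /bin/sh -c "ulimit -n"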

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-942793 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-942793 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.023733423s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-942793 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-942793 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-942793 --alsologtostderr -v=3: (12.038568162s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.04s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-942793 -n default-k8s-diff-port-942793
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-942793 -n default-k8s-diff-port-942793: exit status 7 (87.2109ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-942793 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (354.87s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-942793 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E1205 20:31:15.519579    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/functional-025502/client.crt: no such file or directory
E1205 20:32:18.037439    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.crt: no such file or directory
E1205 20:33:21.039918    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.crt: no such file or directory
E1205 20:34:14.991836    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-942793 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m54.069036996s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-942793 -n default-k8s-diff-port-942793
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (354.87s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-rkhtf" [90d7f644-8581-4715-aa6f-8f65d2297799] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.027904428s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-rkhtf" [90d7f644-8581-4715-aa6f-8f65d2297799] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009056268s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-683987 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-683987 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.30s)
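Note: the image audit lists everything cached in the profile and calls out images that do not ship with minikube itself; the kindnet CNI builds and the busybox test image above were pulled by earlier steps. The same inspection by hand:

	out/minikube-linux-arm64 -p old-k8s-version-683987 image list --format=json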

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (4.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-683987 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-683987 --alsologtostderr -v=1: (1.04558257s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-683987 -n old-k8s-version-683987
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-683987 -n old-k8s-version-683987: exit status 2 (416.698726ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-683987 -n old-k8s-version-683987
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-683987 -n old-k8s-version-683987: exit status 2 (433.681358ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-683987 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-683987 --alsologtostderr -v=1: (1.003511852s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-683987 -n old-k8s-version-683987
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-683987 -n old-k8s-version-683987
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.23s)
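Note: while a profile is paused, "minikube status" exits 2 and reports the API server as Paused and the kubelet as Stopped; the harness accepts both ("may be ok") before unpausing and re-checking. The cycle above as a sketch (profile name from this run):

	out/minikube-linux-arm64 pause -p old-k8s-version-683987 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-683987
	out/minikube-linux-arm64 unpause -p old-k8s-version-683987 --alsologtostderr -v=1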

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (83.36s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-651148 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E1205 20:36:15.520359    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/functional-025502/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-651148 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m23.355088013s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (83.36s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (18.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-mcw27" [bddae486-50be-46b7-979c-364c56a4d23b] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-mcw27" [bddae486-50be-46b7-979c-364c56a4d23b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 18.048995628s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (18.05s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-mcw27" [bddae486-50be-46b7-979c-364c56a4d23b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01078775s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-942793 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-942793 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-942793 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-942793 -n default-k8s-diff-port-942793
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-942793 -n default-k8s-diff-port-942793: exit status 2 (398.173189ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-942793 -n default-k8s-diff-port-942793
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-942793 -n default-k8s-diff-port-942793: exit status 2 (361.547822ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-942793 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-942793 -n default-k8s-diff-port-942793
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-942793 -n default-k8s-diff-port-942793
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.37s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (64.54s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-446085 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-446085 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1: (1m4.543874559s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (64.54s)
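Note: --preload=false makes minikube pull the cluster images at start time rather than extracting them from a preloaded tarball, so this FirstStart exercises the v1.29.0-rc.1 images directly. The equivalent start as a sketch (profile name from this run):

	out/minikube-linux-arm64 start -p no-preload-446085 --memory=2200 --wait=true --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.29.0-rc.1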

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.61s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-651148 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [974be063-4938-4d8c-8b23-650b0680f40d] Pending
helpers_test.go:344: "busybox" [974be063-4938-4d8c-8b23-650b0680f40d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [974be063-4938-4d8c-8b23-650b0680f40d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.038123582s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-651148 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.61s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.89s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-651148 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-651148 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.706147824s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-651148 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.89s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.36s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-651148 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-651148 --alsologtostderr -v=3: (12.357187911s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.36s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-651148 -n embed-certs-651148
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-651148 -n embed-certs-651148: exit status 7 (88.497949ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-651148 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (350.13s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-651148 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E1205 20:37:44.945483    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/old-k8s-version-683987/client.crt: no such file or directory
E1205 20:37:44.950704    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/old-k8s-version-683987/client.crt: no such file or directory
E1205 20:37:44.960940    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/old-k8s-version-683987/client.crt: no such file or directory
E1205 20:37:44.981146    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/old-k8s-version-683987/client.crt: no such file or directory
E1205 20:37:45.021447    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/old-k8s-version-683987/client.crt: no such file or directory
E1205 20:37:45.101689    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/old-k8s-version-683987/client.crt: no such file or directory
E1205 20:37:45.262552    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/old-k8s-version-683987/client.crt: no such file or directory
E1205 20:37:45.583030    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/old-k8s-version-683987/client.crt: no such file or directory
E1205 20:37:46.223890    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/old-k8s-version-683987/client.crt: no such file or directory
E1205 20:37:47.504389    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/old-k8s-version-683987/client.crt: no such file or directory
E1205 20:37:50.065268    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/old-k8s-version-683987/client.crt: no such file or directory
E1205 20:37:55.186085    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/old-k8s-version-683987/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-651148 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m49.544966059s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-651148 -n embed-certs-651148
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (350.13s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.1s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-446085 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fe5ae916-4c76-4f6f-86d0-546c06ab0a17] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fe5ae916-4c76-4f6f-86d0-546c06ab0a17] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.028075124s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-446085 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.48s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-446085 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1205 20:38:05.426671    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/old-k8s-version-683987/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-446085 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.3062985s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-446085 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.48s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.27s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-446085 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-446085 --alsologtostderr -v=3: (12.270201369s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-446085 -n no-preload-446085
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-446085 -n no-preload-446085: exit status 7 (83.358061ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-446085 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (360.58s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-446085 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1
E1205 20:38:21.040594    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.crt: no such file or directory
E1205 20:38:25.907125    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/old-k8s-version-683987/client.crt: no such file or directory
E1205 20:39:06.867334    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/old-k8s-version-683987/client.crt: no such file or directory
E1205 20:39:14.991850    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.crt: no such file or directory
E1205 20:40:03.208557    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/default-k8s-diff-port-942793/client.crt: no such file or directory
E1205 20:40:03.213789    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/default-k8s-diff-port-942793/client.crt: no such file or directory
E1205 20:40:03.223997    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/default-k8s-diff-port-942793/client.crt: no such file or directory
E1205 20:40:03.244245    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/default-k8s-diff-port-942793/client.crt: no such file or directory
E1205 20:40:03.284491    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/default-k8s-diff-port-942793/client.crt: no such file or directory
E1205 20:40:03.364652    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/default-k8s-diff-port-942793/client.crt: no such file or directory
E1205 20:40:03.525062    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/default-k8s-diff-port-942793/client.crt: no such file or directory
E1205 20:40:03.845867    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/default-k8s-diff-port-942793/client.crt: no such file or directory
E1205 20:40:04.486728    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/default-k8s-diff-port-942793/client.crt: no such file or directory
E1205 20:40:05.767486    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/default-k8s-diff-port-942793/client.crt: no such file or directory
E1205 20:40:08.328259    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/default-k8s-diff-port-942793/client.crt: no such file or directory
E1205 20:40:13.449363    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/default-k8s-diff-port-942793/client.crt: no such file or directory
E1205 20:40:23.689575    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/default-k8s-diff-port-942793/client.crt: no such file or directory
E1205 20:40:28.787541    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/old-k8s-version-683987/client.crt: no such file or directory
E1205 20:40:44.169829    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/default-k8s-diff-port-942793/client.crt: no such file or directory
E1205 20:41:15.520361    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/functional-025502/client.crt: no such file or directory
E1205 20:41:25.130448    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/default-k8s-diff-port-942793/client.crt: no such file or directory
E1205 20:42:44.945319    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/old-k8s-version-683987/client.crt: no such file or directory
E1205 20:42:47.051574    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/default-k8s-diff-port-942793/client.crt: no such file or directory
E1205 20:43:04.088740    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.crt: no such file or directory
E1205 20:43:12.628379    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/old-k8s-version-683987/client.crt: no such file or directory
E1205 20:43:21.040281    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/addons-753790/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-446085 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1: (5m59.861235s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-446085 -n no-preload-446085
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (360.58s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.04s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-mqk8x" [c11c090f-97bf-4fba-9591-7de5a1422d1e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-mqk8x" [c11c090f-97bf-4fba-9591-7de5a1422d1e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.033828731s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.04s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.15s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-mqk8x" [c11c090f-97bf-4fba-9591-7de5a1422d1e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011163736s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-651148 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.15s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.41s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-651148 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.41s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (4.82s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-651148 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-651148 --alsologtostderr -v=1: (1.29670788s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-651148 -n embed-certs-651148
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-651148 -n embed-certs-651148: exit status 2 (549.993524ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-651148 -n embed-certs-651148
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-651148 -n embed-certs-651148: exit status 2 (444.750889ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-651148 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p embed-certs-651148 --alsologtostderr -v=1: (1.056792242s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-651148 -n embed-certs-651148
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-651148 -n embed-certs-651148
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.82s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (51.13s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-304629 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1
E1205 20:44:14.992218    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-304629 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1: (51.125333465s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (51.13s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-kns9f" [73665995-71b1-4b54-a0f8-065017b2c1a6] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-kns9f" [73665995-71b1-4b54-a0f8-065017b2c1a6] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.030041178s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-kns9f" [73665995-71b1-4b54-a0f8-065017b2c1a6] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009912992s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-446085 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-446085 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.59s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-446085 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-446085 -n no-preload-446085
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-446085 -n no-preload-446085: exit status 2 (380.249172ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-446085 -n no-preload-446085
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-446085 -n no-preload-446085: exit status 2 (390.962579ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-446085 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-446085 -n no-preload-446085
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-446085 -n no-preload-446085
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.59s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (84.99s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-900155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-900155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m24.985127482s)
--- PASS: TestNetworkPlugins/group/auto/Start (84.99s)
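Note: the "auto" case passes no --cni flag at all (compare the kindnet case later in this report, which adds --cni=kindnet), so minikube picks its default CNI for the crio runtime. The start command, stripped of test plumbing:

	out/minikube-linux-arm64 start -p auto-900155 --memory=3072 --wait=true --wait-timeout=15m --driver=docker --container-runtime=crio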

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.66s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-304629 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-304629 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.65993124s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.66s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.44s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-304629 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-304629 --alsologtostderr -v=3: (1.437786008s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.44s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.44s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-304629 -n newest-cni-304629
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-304629 -n newest-cni-304629: exit status 7 (169.581309ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-304629 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.44s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (33.89s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-304629 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1
E1205 20:45:03.209422    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/default-k8s-diff-port-942793/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-304629 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1: (33.454509363s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-304629 -n newest-cni-304629
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (33.89s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-304629 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)
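VerifyKubernetesImages flags kindest/kindnetd because it is not part of the stock Kubernetes image set. A rough hand-rolled equivalent, assuming the JSON output is a list of objects carrying a repoTags array (field name assumed, not verified against this build):

out/minikube-linux-arm64 -p newest-cni-304629 image list --format=json \
  | jq -r '.[].repoTags[]' | grep -v '^registry.k8s.io/'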

TestStartStop/group/newest-cni/serial/Pause (3.41s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-304629 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-304629 -n newest-cni-304629
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-304629 -n newest-cni-304629: exit status 2 (390.861351ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-304629 -n newest-cni-304629
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-304629 -n newest-cni-304629: exit status 2 (388.377694ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-304629 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-304629 -n newest-cni-304629
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-304629 -n newest-cni-304629
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.41s)
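The Pause sequence above is a full round trip: pause freezes the control plane (APIServer reports Paused, the kubelet Stopped, and status exits 2), then unpause restores both. Condensed:

out/minikube-linux-arm64 pause -p newest-cni-304629
out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-304629   # Paused, exit 2
out/minikube-linux-arm64 unpause -p newest-cni-304629
out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-304629   # back to Running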
E1205 20:51:14.055860    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/auto-900155/client.crt: no such file or directory
E1205 20:51:15.519704    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/functional-025502/client.crt: no such file or directory
E1205 20:51:19.176620    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/auto-900155/client.crt: no such file or directory
E1205 20:51:29.417024    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/auto-900155/client.crt: no such file or directory

TestNetworkPlugins/group/kindnet/Start (77.33s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-900155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1205 20:45:58.566600    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/functional-025502/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-900155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m17.329645613s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (77.33s)

TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-900155 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.34s)
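KubeletFlags is a thin but useful check: pgrep -a prints the kubelet's PID together with its full command line, and the test asserts on the flags it finds there (for example the configured container runtime). Run by hand:

out/minikube-linux-arm64 ssh -p auto-900155 "pgrep -a kubelet"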

TestNetworkPlugins/group/auto/NetCatPod (11.39s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-900155 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dnrlp" [498b3bb1-b6d2-4f7f-8e30-a3d54a8ebf0a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-dnrlp" [498b3bb1-b6d2-4f7f-8e30-a3d54a8ebf0a] Running
E1205 20:46:15.519607    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/functional-025502/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.010179802s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.39s)
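Each NetCatPod step force-replaces the netcat Deployment and then polls until an app=netcat pod is Running, which is why the same pod is logged first as Pending and then Running. An equivalent wait by hand (timeout matching the test's 15m budget):

kubectl --context auto-900155 replace --force -f testdata/netcat-deployment.yaml
kubectl --context auto-900155 wait --for=condition=ready pod -l app=netcat --timeout=15m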

TestNetworkPlugins/group/auto/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-900155 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

TestNetworkPlugins/group/auto/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-900155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.21s)

TestNetworkPlugins/group/auto/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-900155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.21s)
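DNS, Localhost and HairPin above form the connectivity trio that every network-plugin group repeats: in-cluster service DNS, loopback reachability, and hairpin traffic (a pod reaching itself through its own Service name). In the nc invocations, -z scans without sending data, -w 5 caps the wait and -i 5 the interval at five seconds. The same probes, runnable directly:

kubectl --context auto-900155 exec deployment/netcat -- nslookup kubernetes.default
kubectl --context auto-900155 exec deployment/netcat -- nc -w 5 -i 5 -z localhost 8080
kubectl --context auto-900155 exec deployment/netcat -- nc -w 5 -i 5 -z netcat 8080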

TestNetworkPlugins/group/calico/Start (71.39s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-900155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-900155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m11.389017703s)
--- PASS: TestNetworkPlugins/group/calico/Start (71.39s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.05s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-hkz6w" [1f0ab612-840b-453c-b70a-42cb2fca5554] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.044195943s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.05s)
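ControllerPod gates the traffic tests on the CNI's own DaemonSet pod becoming healthy. A hand-rolled equivalent using the same label selector the test waits on:

kubectl --context kindnet-900155 -n kube-system wait --for=condition=ready pod -l app=kindnet --timeout=10m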

TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-900155 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.38s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-900155 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-88hkv" [31e4e221-5712-47ed-9e08-d408d6f86c7a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-88hkv" [31e4e221-5712-47ed-9e08-d408d6f86c7a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.036406558s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.38s)

TestNetworkPlugins/group/kindnet/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-900155 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.27s)

TestNetworkPlugins/group/kindnet/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-900155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.25s)

TestNetworkPlugins/group/kindnet/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-900155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.25s)

TestNetworkPlugins/group/custom-flannel/Start (72.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-900155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1205 20:47:44.945620    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/old-k8s-version-683987/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-900155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m12.322078891s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (72.32s)
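Note the --cni value in this run: unlike the kindnet and calico groups it is a file path (testdata/kube-flannel.yaml) rather than a built-in plugin name, which is how minikube supports bring-your-own CNI manifests. Both spellings side by side:

out/minikube-linux-arm64 start -p calico-900155 --cni=calico --driver=docker --container-runtime=crio
out/minikube-linux-arm64 start -p custom-flannel-900155 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio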

TestNetworkPlugins/group/calico/ControllerPod (5.05s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-ndrxp" [59b8956e-3302-4a00-8219-ddd6cf8b2386] Running
E1205 20:47:55.997262    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/no-preload-446085/client.crt: no such file or directory
E1205 20:47:56.002490    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/no-preload-446085/client.crt: no such file or directory
E1205 20:47:56.012653    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/no-preload-446085/client.crt: no such file or directory
E1205 20:47:56.032883    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/no-preload-446085/client.crt: no such file or directory
E1205 20:47:56.073113    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/no-preload-446085/client.crt: no such file or directory
E1205 20:47:56.154158    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/no-preload-446085/client.crt: no such file or directory
E1205 20:47:56.314481    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/no-preload-446085/client.crt: no such file or directory
E1205 20:47:56.635375    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/no-preload-446085/client.crt: no such file or directory
E1205 20:47:57.275810    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/no-preload-446085/client.crt: no such file or directory
E1205 20:47:58.556376    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/no-preload-446085/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.044577624s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.05s)

TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-900155 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

TestNetworkPlugins/group/calico/NetCatPod (12.45s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-900155 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xm5f6" [239ca125-b96b-4276-b235-360b5e8d5bd2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1205 20:48:01.116925    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/no-preload-446085/client.crt: no such file or directory
E1205 20:48:06.238212    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/no-preload-446085/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-xm5f6" [239ca125-b96b-4276-b235-360b5e8d5bd2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.01594156s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.45s)

TestNetworkPlugins/group/calico/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-900155 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.25s)

TestNetworkPlugins/group/calico/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-900155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

TestNetworkPlugins/group/calico/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-900155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

TestNetworkPlugins/group/enable-default-cni/Start (90.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-900155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-900155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m30.296949613s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (90.30s)
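--enable-default-cni is the legacy spelling for the built-in bridge CNI; to our reading of minikube's flag deprecation notice (not something this log asserts), the run above is roughly equivalent to:

out/minikube-linux-arm64 start -p enable-default-cni-900155 --cni=bridge --driver=docker --container-runtime=crio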

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.61s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-900155 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.61s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.66s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-900155 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7wx8g" [5db8f556-f44f-4e87-8deb-3c220836bd03] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7wx8g" [5db8f556-f44f-4e87-8deb-3c220836bd03] Running
E1205 20:48:58.038497    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.015250189s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.66s)

TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-900155 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-900155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-900155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

TestNetworkPlugins/group/flannel/Start (63.04s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-900155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1205 20:50:03.208792    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/default-k8s-diff-port-942793/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-900155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m3.039641941s)
--- PASS: TestNetworkPlugins/group/flannel/Start (63.04s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.48s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-900155 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.48s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.54s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-900155 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mpdfb" [947d4bd4-11e6-4d62-ad1d-d34a0b3e57eb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-mpdfb" [947d4bd4-11e6-4d62-ad1d-d34a0b3e57eb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.017454699s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.54s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-900155 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-900155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-900155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-xbk7h" [63c458bc-16f9-48f9-a838-f728efbfa0fb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.029926007s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.49s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-900155 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.49s)

TestNetworkPlugins/group/flannel/NetCatPod (11.49s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-900155 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-57lw8" [ee3e3758-f033-4cb9-871e-f6ce56effcc9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-57lw8" [ee3e3758-f033-4cb9-871e-f6ce56effcc9] Running
E1205 20:50:39.840408    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/no-preload-446085/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.027635514s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.49s)

TestNetworkPlugins/group/bridge/Start (49.57s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-900155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-900155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (49.567706993s)
--- PASS: TestNetworkPlugins/group/bridge/Start (49.57s)

TestNetworkPlugins/group/flannel/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-900155 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.30s)

TestNetworkPlugins/group/flannel/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-900155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.24s)

TestNetworkPlugins/group/flannel/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-900155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.27s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-900155 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

TestNetworkPlugins/group/bridge/NetCatPod (11.31s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-900155 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-56p5j" [bbc96701-8ab6-4efe-8299-954b959e529e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-56p5j" [bbc96701-8ab6-4efe-8299-954b959e529e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.010034263s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.31s)

TestNetworkPlugins/group/bridge/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-900155 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

TestNetworkPlugins/group/bridge/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-900155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

TestNetworkPlugins/group/bridge/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-900155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

Test skip (32/315)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)
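The cached-images and binaries skips in this group all trace back to preloads: when a preloaded tarball for the requested Kubernetes version and container runtime exists, minikube extracts it wholesale instead of caching images or binaries individually, so there is nothing for these tests to verify. To inspect what a host has preloaded (default cache layout):

ls ~/.minikube/cache/preloaded-tarball/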

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.1/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.1/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.1/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.1/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.1/kubectl (0.00s)

TestDownloadOnlyKic (0.62s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-224607 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:237: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-224607" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-224607
--- SKIP: TestDownloadOnlyKic (0.62s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:443: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-921420" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-921420
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (5.61s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
E1205 20:24:14.991855    7773 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/ingress-addon-legacy-867324/client.crt: no such file or directory
panic.go:523: 
----------------------- debugLogs start: kubenet-900155 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-900155

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-900155

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-900155

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-900155

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-900155

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-900155

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-900155

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-900155

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-900155

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-900155

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900155"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900155"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900155"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-900155

>>> host: crictl pods:
* Profile "kubenet-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900155"

>>> host: crictl containers:
* Profile "kubenet-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900155"

>>> k8s: describe netcat deployment:
error: context "kubenet-900155" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-900155" does not exist

>>> k8s: netcat logs:
error: context "kubenet-900155" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-900155" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-900155" does not exist

>>> k8s: coredns logs:
error: context "kubenet-900155" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-900155" does not exist

>>> k8s: api server logs:
error: context "kubenet-900155" does not exist

>>> host: /etc/cni:
* Profile "kubenet-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900155"

>>> host: ip a s:
* Profile "kubenet-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900155"

>>> host: ip r s:
* Profile "kubenet-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900155"

>>> host: iptables-save:
* Profile "kubenet-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900155"

>>> host: iptables table nat:
* Profile "kubenet-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900155"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-900155" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-900155" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-900155" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900155"

>>> host: kubelet daemon config:
* Profile "kubenet-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900155"

>>> k8s: kubelet logs:
* Profile "kubenet-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900155"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900155"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900155"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17731-2478/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 05 Dec 2023 20:24:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: force-systemd-flag-415679
contexts:
- context:
    cluster: force-systemd-flag-415679
    extensions:
    - extension:
        last-update: Tue, 05 Dec 2023 20:24:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: force-systemd-flag-415679
  name: force-systemd-flag-415679
current-context: force-systemd-flag-415679
kind: Config
preferences: {}
users:
- name: force-systemd-flag-415679
  user:
    client-certificate: /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/force-systemd-flag-415679/client.crt
    client-key: /home/jenkins/minikube-integration/17731-2478/.minikube/profiles/force-systemd-flag-415679/client.key
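
Note that the kubeconfig dumped above belongs to force-systemd-flag-415679, a profile from a concurrently running test; kubenet-900155 appears nowhere in it, which is consistent with every "--context kubenet-900155" probe in this section failing. A minimal sketch in Go (assuming the k8s.io/client-go dependency; the kubeconfig path is hypothetical) of verifying that directly:

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical path; substitute the kubeconfig the harness actually writes.
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("current-context:", cfg.CurrentContext) // force-systemd-flag-415679 in the dump above
	if ctx, ok := cfg.Contexts[cfg.CurrentContext]; ok {
		if cl, ok := cfg.Clusters[ctx.Cluster]; ok {
			fmt.Println("server:", cl.Server) // https://192.168.76.2:8443 in the dump above
		}
	}
	// kubenet-900155 has no entry at all, matching the errors in this section.
	if _, ok := cfg.Contexts["kubenet-900155"]; !ok {
		fmt.Println(`context "kubenet-900155" does not exist`)
	}
}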

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-900155

>>> host: docker daemon status:
* Profile "kubenet-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900155"

>>> host: docker daemon config:
* Profile "kubenet-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900155"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900155"

>>> host: docker system info:
* Profile "kubenet-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900155"

>>> host: cri-docker daemon status:
* Profile "kubenet-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900155"

>>> host: cri-docker daemon config:
* Profile "kubenet-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900155"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900155"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900155"

>>> host: cri-dockerd version:
* Profile "kubenet-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900155"

>>> host: containerd daemon status:
* Profile "kubenet-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900155"

>>> host: containerd daemon config:
* Profile "kubenet-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900155"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900155"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900155"

>>> host: containerd config dump:
* Profile "kubenet-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900155"

>>> host: crio daemon status:
* Profile "kubenet-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900155"

>>> host: crio daemon config:
* Profile "kubenet-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900155"

>>> host: /etc/crio:
* Profile "kubenet-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900155"

>>> host: crio config:
* Profile "kubenet-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900155"

----------------------- debugLogs end: kubenet-900155 [took: 5.447949005s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-900155" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-900155
--- SKIP: TestNetworkPlugins/group/kubenet (5.61s)
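
All of the probes above fail the same way because the kubenet-900155 profile was never started. A minimal sketch in Go of a collector that shells out per command and records the failure in the same ">>> label:" shape; this is illustrative only, not minikube's actual debugLogs implementation:

package main

import (
	"fmt"
	"os/exec"
)

// probe runs one diagnostic command and prints its combined output
// under a ">>> label:" header, in the spirit of the blocks in this log.
func probe(label, name string, args ...string) {
	fmt.Printf(">>> %s:\n", label)
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		// For a profile that was never started, kubectl exits non-zero and
		// its output carries e.g. `error: context "kubenet-900155" does not exist`.
		fmt.Println("(command failed:", err, ")")
	}
	fmt.Println()
}

func main() {
	probe("k8s: describe netcat deployment",
		"kubectl", "--context", "kubenet-900155", "describe", "deployment", "netcat")
}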

x
+
TestNetworkPlugins/group/cilium (6.14s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-900155 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-900155

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-900155

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-900155

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-900155

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-900155

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-900155

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-900155

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-900155

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-900155

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-900155

>>> host: /etc/nsswitch.conf:
* Profile "cilium-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900155"

>>> host: /etc/hosts:
* Profile "cilium-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900155"

>>> host: /etc/resolv.conf:
* Profile "cilium-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900155"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-900155

>>> host: crictl pods:
* Profile "cilium-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900155"

>>> host: crictl containers:
* Profile "cilium-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900155"

>>> k8s: describe netcat deployment:
error: context "cilium-900155" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-900155" does not exist

>>> k8s: netcat logs:
error: context "cilium-900155" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-900155" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-900155" does not exist

>>> k8s: coredns logs:
error: context "cilium-900155" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-900155" does not exist

>>> k8s: api server logs:
error: context "cilium-900155" does not exist

>>> host: /etc/cni:
* Profile "cilium-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900155"

>>> host: ip a s:
* Profile "cilium-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900155"

>>> host: ip r s:
* Profile "cilium-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900155"

>>> host: iptables-save:
* Profile "cilium-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900155"

>>> host: iptables table nat:
* Profile "cilium-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900155"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-900155

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-900155

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-900155" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-900155" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-900155

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-900155

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-900155" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-900155" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-900155" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-900155" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-900155" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900155"

>>> host: kubelet daemon config:
* Profile "cilium-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900155"

>>> k8s: kubelet logs:
* Profile "cilium-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900155"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900155"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900155"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-900155

>>> host: docker daemon status:
* Profile "cilium-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900155"

>>> host: docker daemon config:
* Profile "cilium-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900155"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900155"

>>> host: docker system info:
* Profile "cilium-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900155"

>>> host: cri-docker daemon status:
* Profile "cilium-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900155"

>>> host: cri-docker daemon config:
* Profile "cilium-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900155"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900155"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900155"

>>> host: cri-dockerd version:
* Profile "cilium-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900155"

>>> host: containerd daemon status:
* Profile "cilium-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900155"

>>> host: containerd daemon config:
* Profile "cilium-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900155"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900155"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900155"

>>> host: containerd config dump:
* Profile "cilium-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900155"

>>> host: crio daemon status:
* Profile "cilium-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900155"

>>> host: crio daemon config:
* Profile "cilium-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900155"

>>> host: /etc/crio:
* Profile "cilium-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900155"

>>> host: crio config:
* Profile "cilium-900155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900155"

----------------------- debugLogs end: cilium-900155 [took: 5.919279961s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-900155" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-900155
--- SKIP: TestNetworkPlugins/group/cilium (6.14s)